AU2006332582A1 - Modeling the three-dimensional shape of an object by shading of a two-dimensional image - Google Patents


Info

Publication number
AU2006332582A1
Authority
AU
Australia
Prior art keywords
shading
model
subdivision
image
updated
Prior art date
Legal status
Abandoned
Application number
AU2006332582A
Inventor
Jennifer Courter
Rolf Herken
Tom-Michael Thamm
Current Assignee
Nvidia ARC GmbH
Original Assignee
Mental Images GmbH
Priority date
Filing date
Publication date
Application filed by Mental Images GmbH filed Critical Mental Images GmbH
Publication of AU2006332582A1

Classifications

    • G – PHYSICS
    • G06 – COMPUTING; CALCULATING OR COUNTING
    • G06T – IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 – Image analysis
    • G06T7/50 – Depth or shape recovery
    • G06T7/507 – Depth or shape recovery from shading

Description

WO 2007/079361 PCT/US2006/062405

MODELING THE THREE-DIMENSIONAL SHAPE OF AN OBJECT BY SHADING OF A TWO-DIMENSIONAL IMAGE

Cross-Reference to Related Applications

This application is a continuation-in-part of commonly owned, co-pending U.S. Patent Application Serial No. 10/795,704 (Attorney Docket MENT-003D1), filed on March 5, 2004; which is a divisional of U.S. Patent Application No. 09/027,175 (Attorney Docket MENT-003), filed on Feb. 20, 1998 (now U.S. Patent No. 6,724,383); which claims the priority benefit of U.S. Provisional Application for Patent Serial No. 60/038,888 (Attorney Docket MENT-003-PR), filed on Feb. 21, 1997; all three applications being incorporated herein by reference. This application for U.S. Patent also claims the priority benefit of U.S. Provisional Patent Application Serial No. 60/752,230 (Attorney Docket MNTL-106-PR), filed December 20, 2005; and U.S. Provisional Patent Application Serial No. 60/823,464 (Attorney Docket MENT-089-B PR), filed Aug. 24, 2006; these two applications also being incorporated herein by reference. Also incorporated herein by reference are commonly owned, co-pending U.S. Patent Application Serial No. 09/852,906 (MENT-060), filed May 9, 2001, now allowed; and U.S. Patent Application Serial No. 10/062,192 (MENT-062), filed Feb. 1, 2002, now allowed.

Field of the Invention

The present invention relates to the field of computer graphics, computer-aided geometric design and the like, and in particular to improved systems and techniques for modeling the three-dimensional shape of an object by shading of a two-dimensional image.

Background of the Invention

In computer graphics, computer-aided geometric design and the like, an artist, draftsman or the like (generally referred to herein as an "operator") attempts to generate a three-dimensional model of an object, as maintained by a computer, from lines defining two-dimensional views of objects.
Conventionally, computer-graphical arrangements generate a three-dimensional model from, for example, various two-dimensional line drawings comprising contours and/or cross-sections of the object, by applying a number of operations to such lines which will result in two-dimensional surfaces in three-dimensional space, and by subsequent modification of parameters and control points of such surfaces to correct or otherwise modify the shape of the resulting model of the object. After a three-dimensional model for the object has been generated, it may be viewed or displayed in any of a number of orientations.

In a field of artificial intelligence commonly referred to as robot vision or machine vision (which will generally be referred to herein as "machine vision"), a methodology referred to as "shape from shading" is used to generate a three-dimensional model of an existing object from one or more two-dimensional images of the object as recorded by a camera. Generally, in machine vision, the type of the object recorded on the image(s) is initially unknown to the machine, and the model of the object that is generated is generally used, for example, to facilitate identification of the type of the object depicted on the image(s) by the machine or another device. In the shape-from-shading methodology, the object to be modeled is illuminated by a light source, and a camera, such as a photographic or video camera, is used to record the image(s) from which the object will be modeled. It is assumed that the orientation of the light source, the camera position and the image plane relative to the object are known. In addition, it is assumed that the reflectance properties of the surface of the object are also known.
It is further assumed that an orthographic projection technique is used to project the surface of the object onto the image plane; that is, it is assumed that an implicit camera that is recording the image on the image plane has a focal length of infinity. The image plane represents the x, y coordinate axes (that is, any point on the image plane can be identified by coordinates (x, y)), and the z axis is thus normal to the image plane; as a result, any point on the surface of the object that can be projected onto the image plane can be represented by the coordinates (x, y, z). The image of the object as projected onto the image plane is represented by an image irradiance function I(x, y) over a two-dimensional domain Ω ⊂ R², while the shape of the object is given by a height function z(x, y) over the domain Ω. The image irradiance function I(x, y) represents the brightness of the object at each point (x, y) in the image. In the shape-from-shading methodology, given I(x, y) for all points (x, y) in the domain, the shape of the object, given by z(x, y), is determined.

It would be desirable to provide improved methods and systems for generating a three-dimensional model of an object by shading as applied to a two-dimensional image of an object.

Summary of the Invention

The present invention provides improved methods and systems for generating a three-dimensional model of an object by shading. One aspect of the invention provides improvements to the shape-by-shading (SBS) systems and methods described in commonly owned U.S. Patent No. 6,724,383.
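The shape-from-shading setup just described can be made concrete with a short worked example. The following sketch (in Python, with illustrative names; it is not part of the patent) evaluates the image irradiance function I(x, y) = n · L for a Lambertian hemisphere of radius r, whose height function z(x, y) = √(r² − x² − y²) is known in closed form; shape from shading is the inverse problem of recovering such a z(x, y) from I(x, y):

```python
import math

def sphere_irradiance(x, y, r, L):
    """Image irradiance I(x, y) = n . L of a Lambertian hemisphere of
    radius r under orthographic projection, for a point (x, y) inside
    the hemisphere's silhouette (x^2 + y^2 < r^2)."""
    z = math.sqrt(r * r - x * x - y * y)    # height function z(x, y)
    n = (x / r, y / r, z / r)               # unit surface normal at (x, y, z)
    # Lambertian brightness is the dot product n . L, clamped at zero
    # because a surface facing away from the light reflects nothing.
    return max(0.0, n[0] * L[0] + n[1] * L[1] + n[2] * L[2])

L = (0.0, 0.0, 1.0)                         # light along the camera (z) axis
print(sphere_irradiance(0.0, 0.0, 1.0, L))  # 1.0  (brightest at the center)
print(sphere_irradiance(0.8, 0.0, 1.0, L))  # 0.6  (dimmer toward the rim)
```

Given such an I(x, y) over the domain Ω, the shape-from-shading task is to run this computation in reverse and recover z(x, y).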
Another aspect of the invention relates to particular shaping techniques, methods and algorithms that can be implemented in a shape-by-shading (SBS) modeler in accordance with the invention, and more particularly, methods and algorithms that advantageously exploit trust-region models and methods.

Brief Description of the Drawings

This invention is pointed out with particularity in the appended claims. The above and further advantages of this invention may be better understood by referring to the following description taken in conjunction with the accompanying drawings, in which:

FIGS. 1-4 are a series of diagrams illustrating components of an exemplary digital processing environment in which aspects of the present invention can be deployed.

FIG. 5 depicts a computer graphics system for generating a three-dimensional model of an object by shading as applied by an operator or the like to a two-dimensional image of the object in the given state of its creation at any point in time, constructed in accordance with the invention.

FIGS. 6-10 are a series of diagrams that are useful in understanding the operations performed by the computer graphics system depicted in FIG. 5 in determining the updating of the model of an object by shading as applied to the two-dimensional image of the object in its given state of creation at any point in time.

FIGS. 11A and 11B show a flowchart depicting operations performed by the computer graphics system and operator in connection with the invention.

FIG. 12 shows a diagram illustrating data flow in an SBS system according to the present invention.

FIG. 13 shows a screenshot 350 of the SBS Modifier in 3ds max.

FIGS. 14 and 15 show pseudo-code implementations of SBS techniques according to aspects of the invention.

FIGS. 16A and 16B show a table that provides a listing of mathematical notation used in the present description of the invention. FIGS.
17-22 show a series of flowcharts of a generalized method and sub-methods according to various aspects of the present invention.

Detailed Description of the Invention

These and other aspects, embodiments, practices, implementations and examples of the invention are set forth in the following detailed description, which is divided into sections as follows:

I. Digital Processing Environment in Which Invention Can Be Implemented
II. SBS Modeler
III. Improvements to SBS Modeler
    3.1 Introduction
    3.2 SBS Shading and Shaping Process
    3.3 Surface Handling Improvements
    3.4 Additional Shaping Algorithm(s)
    3.5 SBS C++ API
    3.6 A Prototype: SBS Plug-in for 3ds max
    3.7 Extensions to SBS
IV. Shaping Methods and Algorithms Implemented in the SBS Modeler
    4.1 Introduction
    4.2 Rasterization
    4.3 Reduction of Function to Be Minimized
    4.4 Trust-Region Newton-CG Method
    4.5 Computation of the Trust-Region Model
    4.6 Minimization of the Trust-Region Model
    4.7 Convergence of Trust-Region Newton-CG Method
V. Flowcharts of Generalized Methods

I. Digital Processing Environment in Which Invention Can Be Implemented

Before describing particular examples and embodiments of the invention, the following is a discussion, to be read in connection with FIGS. 1-4, of underlying digital processing structures and environments in which the invention may be implemented and practiced.

Those skilled in the art will understand that the present invention can be utilized in the generation and synthesis of images, such as for display in a motion picture or other dynamic display. The techniques described herein can be practiced as part of a computer graphics system, in which a pixel value is generated for pixels in an image. The pixel value is representative of a point in a scene as recorded on an image plane of a simulated camera.
The underlying computer graphics system can be configured to generate the pixel value for an image using a selected methodology, such as that of the present invention.

The following detailed description illustrates examples of methods, structures, systems, and computer software products in accordance with these techniques. It will be understood by those skilled in the art that the described methods and systems can be implemented in software, hardware, or a combination of software and hardware, using conventional computer apparatus such as a personal computer (PC) or equivalent device operating in accordance with (or emulating) a conventional operating system such as Microsoft Windows, Linux, or Unix, either in a standalone configuration or across a network. The various processing aspects and means described herein may therefore be implemented in the software and/or hardware elements of a properly configured digital processing device or network of devices. Processing may be performed sequentially or in parallel, and may be implemented using special-purpose or reconfigurable hardware.

As an example, FIG. 1 attached hereto depicts an illustrative computer system 10 that can carry out such computer graphics processes. With reference to FIG. 1, the computer system 10 in one embodiment includes a processor module 11 and operator interface elements comprising operator input components such as a keyboard 12A and/or a mouse 12B (or digitizing tablet or other analogous element(s), generally identified as operator input element(s) 12) and an operator output element such as a video display device 13. The illustrative computer system 10 can be of a conventional stored-program computer architecture.
The processor module 11 can include, for example, one or more processors, memory and mass storage devices, such as disk and/or tape storage elements (not separately shown), which perform processing and storage operations in connection with digital data provided thereto. The operator input element(s) 12 can be provided to permit an operator to input information for processing. The video display device 13 can be provided to display output information generated by the processor module 11 on a screen 14 to the operator, including data that the operator may input for processing, information that the operator may input to control processing, as well as information generated during processing. The processor module 11 can generate information for display by the video display device 13 using a so-called "graphical user interface" ("GUI"), in which information for various applications programs is displayed using various "windows."

Although the computer system 10 is shown as comprising particular components, such as the keyboard 12A and mouse 12B for receiving input information from an operator, and a video display device 13 for displaying output information to the operator, it will be appreciated that the computer system 10 may include a variety of components in addition to or instead of those depicted in FIG. 1.

In addition, the processor module 11 can include one or more network ports, generally identified by reference numeral 14, which are connected to communication links which connect the computer system 10 in a computer network. The network ports enable the computer system 10 to transmit information to, and receive information from, other computer systems and other devices in the network.
In a typical network organized according to, for example, the client-server paradigm, certain computer systems in the network are designated as servers, which store data and programs (generally, "information") for processing by the other, client computer systems, thereby enabling the client computer systems to conveniently share the information. A client computer system which needs access to information maintained by a particular server will enable the server to download the information to it over the network. After processing the data, the client computer system may also return the processed data to the server for storage. In addition to computer systems (including the above-described servers and clients), a network may also include, for example, printers and facsimile devices, digital audio or video storage and distribution devices, and the like, which may be shared among the various computer systems connected in the network. The communication links interconnecting the computer systems in the network may, as is conventional, comprise any convenient information-carrying medium, including wires, optical fibers or other media for carrying signals among the computer systems.
Computer systems transfer information over the network by means of messages transferred over the communication links, with each message including information and an identifier identifying the device to receive the message.

In addition to the computer system 10 shown in the drawings, methods, devices or software products in accordance with the present invention can operate on any of a wide range of conventional computing devices and systems, such as those depicted by way of example in FIG. 2 (e.g., network system 100), whether standalone, networked, portable or fixed, including conventional PCs 102, laptops 104, handheld or mobile computers 106, or across the Internet or other networks 108, which may in turn include servers 110 and storage 112.

In line with conventional computer software and hardware practice, a software application configured in accordance with the invention can operate within, e.g., a PC 102 like that shown in FIGS. 2 and 3, in which program instructions can be read from ROM or CD-ROM 116 (FIG. 3), magnetic disk or other storage 120, and loaded into RAM 114 for execution by CPU 118. Data can be input into the system via any known device or means, including a conventional keyboard, scanner, mouse, digitizing tablet, or other elements 103.

Those skilled in the art will understand that the method aspects of the invention described herein can be executed in hardware elements, such as an Application-Specific Integrated Circuit (ASIC) constructed specifically to carry out the processes described herein, using ASIC construction techniques known to ASIC manufacturers. Various forms of ASICs are available from many manufacturers, although currently available ASICs do not provide the functions described in this patent application. Such manufacturers include Intel Corporation and NVIDIA Corporation, both of Santa Clara, California.
The actual semiconductor elements of a conventional ASIC or equivalent integrated circuit are not part of the present invention, and will not be discussed in detail herein.

Those skilled in the art will also understand that ASICs or other conventional integrated-circuit or semiconductor elements can be implemented in such a manner, using the teachings of the present invention as described in greater detail herein, to carry out the methods of the present invention as discussed in greater detail below, and to implement a Shape-by-Shading Module 150 within processing system 102, as shown in FIG. 4. In accordance with the following described systems and techniques, the Shape-by-Shading Module 150 can include one or more of the following sub-modules: shading information input module 150a, model generator module 150b, and display output module 150c. The Shape-by-Shading Module 150 may also include other components described herein, generally depicted in box 150d as "tools/API/plug-ins." As further shown in FIG. 4, the output of the Shape-by-Shading Module 150 may be provided in a number of different forms, including displayable images, digitally updated geometric models, subdivision surfaces, and the like.

Those skilled in the art will also understand that method aspects of the present invention can be carried out within commercially available digital processing systems, such as workstations and personal computers (PCs), operating under the collective command of the workstation or PC's operating system and a computer program product configured in accordance with the present invention. The term "computer program product" can encompass any set of computer-readable program instructions encoded on a computer-readable medium.
A computer-readable medium can encompass any form of computer-readable element, including, but not limited to, a computer hard disk, computer floppy disk, computer-readable flash drive, computer-readable RAM or ROM element, or any other known means of encoding, storing or providing digital information, whether local to or remote from the workstation, PC or other digital processing device or system. Various forms of computer-readable elements and media are well known in the computing arts, and their selection is left to the implementer. In each case, the invention is operable to enable a computer system to calculate a pixel value, and the pixel value can be used by hardware elements in the computer system, which can be conventional elements such as graphics cards or display controllers, to generate a display-controlling electronic output. Conventional graphics cards and display controllers are well known in the computing arts, are not necessarily part of the present invention, and their selection can be left to the implementer.

II. SBS Modeler

FIG. 5 depicts a computer graphics system 200 for generating a three-dimensional model of an object by shading as applied by an operator or the like to a two-dimensional image of the object in the given state of its creation at any point in time, constructed in accordance with the invention. With reference to FIG. 5, the computer graphics system includes a processor module 201, one or more operator input devices 202 and one or more display devices 203. The display device(s) 203 will typically comprise a frame buffer, video display terminal or the like, which will display information in textual and/or graphical form on a display screen to the operator. The operator input devices 202 for a computer graphics system 200 will typically include a pen 204, which is typically used in conjunction with a digitizing tablet 205, and a trackball or mouse device 206.
Generally, the pen 204 and digitizing tablet will be used by the operator in several modes. In one mode, particularly useful in connection with the invention, the pen 204 and digitizing tablet are used to provide updated shading information to the computer graphics system. In other modes, the pen and digitizing tablet are used by the operator to input conventional computer graphics information, such as line drawings for, for example, surface trimming and other information, to the computer graphics system 200, thereby to enable the system 200 to perform conventional computer graphics operations. The trackball or mouse device 206 can be used to move a cursor or pointer over the screen to particular points in the image at which the operator can provide input with the pen and digitizing tablet. The computer graphics system 200 may also include a keyboard (not shown) which the operator can use to provide textual input to the system 200.

The processor module 201 generally includes a processor, which may be in the form of one or more microprocessors, a main memory, and will generally include a mass storage subsystem including one or more disk storage devices. The memory and disk storage devices will generally store data and programs (collectively, "information") to be processed by the processor, and will store processed data which has been generated by the processor. The processor module includes connections to the operator input device(s) 202 and the display device(s) 203, and will receive information input by the operator through the operator input device(s) 202, process the input information, and store the processed information in the memory and/or mass storage subsystem. In addition, the processor module can provide video display information, which can form part of the information obtained from the memory and disk storage devices as well as processed data generated thereby, to the display device(s) for display to the operator.
The processor module 201 may also include connections (not shown) to hardcopy output devices such as printers for facilitating the generation of hardcopy output, modems and/or network interfaces (also not shown) for connecting the system 200 to the public telephony system and/or to a computer network for facilitating the transfer of information, and the like.

The computer graphics system 200 generates, from input provided by the operator through the pen and digitizing tablet and the mouse, information defining the initial and subsequent shape of a three-dimensional object, which information may be used to generate a two-dimensional image of the corresponding object for display to the operator, thereby to generate a model of the object. The image displayed by the computer graphics system 200 represents the image of the object as illuminated from an illumination direction and as projected onto an image plane, with the object having a spatial position and rotational orientation relative to the illumination direction and the image plane, and a scaling and/or zoom setting, as selected by the operator. The initial model used in the model generation process may be one of a plurality of default models provided by the computer graphics system itself, such as a model defining a hemispherical or ellipsoid shape. Alternatively, the initial model may be provided by the operator by providing initial shading of at least one pixel of the image plane, using the pen 204 and digitizing tablet 205.
If the initial model is provided by the operator, one of the pixels on the image plane is selected to provide a "reference" portion of the initial surface fragment for the object, the reference initial surface fragment portion having a selected spatial position, rotational orientation and height value with respect to the image plane, and the computer graphics system determines the initial model for the rest of the surface fragment (if any) in relation to shading (if any) applied to other pixels on the image plane. In one embodiment, the reference initial surface fragment portion is selected to be the portion of the surface fragment corresponding to the first pixel on the image plane to which the operator applies shading. In addition, in that embodiment, the reference initial surface fragment portion is determined to be parallel to the image plane, so that a vector normal to the reference initial surface fragment portion is orthogonal to the image plane, and the reference initial surface fragment portion has a height value as selected by the operator. In any case, the computer graphics system will display the image of the initial model, the image defining the shading of the object associated with the initial model as illuminated from the particular illumination direction and projected onto the image plane.

The operator, using the mouse and the pen and digitizing tablet, will provide updated shading of the image of the initial object, and/or extend the object by shading neighboring areas on the image plane, and the computer graphics system 200 will generate an updated model representing the shape of the object based on the updated shading provided by the operator. In updating the shading, the operator can increase or decrease the amount of shading applied to particular points on the image plane.
In addition, the operator, using the mouse or trackball and the pen and digitizing tablet, can perform conventional computer graphics operations in connection with the image, such as trimming of the surface representation of the object defined by the model. The computer graphics system 200 can use the updated shading and other computer graphic information provided by the operator to generate the updated model defining the shape of the object, and further generate from the updated model a two-dimensional image for display to the operator, from respective spatial position(s), rotational orientation(s) and scaling and/or zoom settings as selected by the operator.

If the operator determines that the shape of the object as represented by the updated model is satisfactory, he or she can enable the computer graphics system 200 to store the updated model as defining the shape of the final object. On the other hand, if the operator determines that the shape of the object as represented by the updated model is not satisfactory, he or she can cooperate with the computer graphics system 200 to further update the shading and other computer graphic information, in the process using three-dimensional rotation, translation and scaling or zooming as needed. As the shading and other computer graphic information is updated, the computer graphics system 200 updates the model information, which is again used to provide a two-dimensional image of the object, from rotational orientations, translation or spatial position settings, and scale and/or zoom settings as selected by the operator. These operations can continue until the operator determines that the shape of the object is satisfactory, at which point the computer graphics system 200 will store the updated model information as representing the final object.
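The interactive workflow described above — display an image of the model, let the operator update the shading, regenerate the model, and repeat until the operator is satisfied — can be summarized as a simple loop. The sketch below is illustrative only; every callable is a hypothetical stand-in for operator input (`get_shading`, `satisfactory`) or for the SBS update step (`update_model`), not an API defined by the patent:

```python
def sbs_session(model, render, get_shading, update_model, satisfactory):
    """Sketch of the interactive modeling loop: render the model, collect
    shading from the operator, derive an updated model, and repeat until
    the operator accepts the shape."""
    image = render(model)
    while not satisfactory(model, image):
        shading = get_shading(image)          # operator shades the 2-D image
        model = update_model(model, shading)  # system derives an updated model
        image = render(model)                 # ...and redisplays it
    return model                              # stored as the final object

# Toy run: the "model" is a number and each shading pass nudges it toward 5.
final = sbs_session(
    model=0,
    render=lambda m: m,                       # the "image" is the model itself
    get_shading=lambda img: 1,
    update_model=lambda m, s: m + s,
    satisfactory=lambda m, img: m >= 5,
)
print(final)   # 5
```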
The detailed operations performed by the computer graphics system 200 in determining the shape of an object will be described in connection with FIGS. 6-11. With reference to FIG. 6, in the operations of the computer graphics system 200, it is assumed that the image of the object is projected onto a two-dimensional image plane 220 that is tessellated into pixels 221 P(i, j) having a predetermined number of rows and columns. The image plane 220 defines an x, y Cartesian plane, with rows extending in the x direction and columns extending in the y direction. The projection of the surface of the object, which is identified in FIG. 6 by reference numeral 222, that is to be formed is orthographic, with the direction of the camera's "eye" being in the z direction, orthogonal to the x, y image plane. Each point on the image plane corresponds to a picture element, or "pixel," represented herein by P(i, j), with i ∈ [1, N] and j ∈ [1, M], where N is the maximum number of columns (index i ranging over the columns in the image plane) and M is the maximum number of rows (index j ranging over the rows in the image plane). In the illustrative image plane 220 depicted in FIG. 6, the number of columns N is eight, and the number of rows M is nine. If the display device(s) 203 which are used to depict the image plane 220 to the operator are raster-scan devices, the rows may correspond to scan lines used by the device(s) to display the image. Each pixel P(i, j) corresponds to a particular point (x, y) of the coordinate system, and M × N identifies the resolution of the image. In addition, the computer graphics system 200 assumes that the object is illuminated by a light source having a direction L = (x_L, y_L, z_L), where L is a vector, and that the surface of the object is Lambertian. The implicit camera,
whose image plane is represented by the image plane 220, is assumed to view the image plane 220 from a direction that is orthogonal to the image plane 220, as is represented by the arrow with the label "CAMERA."

As noted above, the computer graphics system 200 initializes the object with at least an infinitesimally small portion of the object to be modeled as the initial model. For each pixel P(i, j), the height value z(x, y) defining the height of the portion of the object projected onto the pixel is known, and is defined as a height field H(x, y) as follows:

H(x, y) = {z(x, y) : ∀(x, y) ∈ Ω}    (2.01)

where ∀(x, y) ∈ Ω refers to "for all points (x, y) in the domain Ω," with the domain Ω referring to the image plane 220. Furthermore, for each pixel P(i, j), the normal n(x, y) of the portion of the surface of the basic initial object projected thereon is also known and is defined as a normal field N(x, y) as follows:

N(x, y) = {n(x, y) : ∀z(x, y) ∈ H(x, y)}    (2.02)

In FIG. 6, the normal associated with the surface 222 of the object projected onto one of the pixels of the image plane 220 is represented by the arrow labeled "n."

After the computer graphics system 200 displays the image representing the object defined by the initial model, which is displayed to the operator on the display 203 as the image on image plane 220, the operator can begin to modify the image by updating the shading of the image using the pen 204 and digitizing tablet 205 (FIG. 5).
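Equations (2.01) and (2.02) can be realized discretely on the pixel grid. A minimal sketch follows (using NumPy; the central-difference scheme for the surface gradient is an assumption made here for illustration, not something the patent prescribes):

```python
import numpy as np

def normal_field(H):
    """Given a discrete height field H (Eq. 2.01) sampled over the image
    plane, return the unit normal field N (Eq. 2.02) of the surface
    z = H(x, y).  The graph's unnormalized normal is (-dz/dx, -dz/dy, 1)."""
    dz_dy, dz_dx = np.gradient(H)          # rows vary in y, columns in x
    N = np.dstack((-dz_dx, -dz_dy, np.ones_like(H)))
    N /= np.linalg.norm(N, axis=2, keepdims=True)
    return N

# A plane tilted in x: z = 0.5 * x, so every normal is (-0.5, 0, 1) normalized.
H = np.fromfunction(lambda i, j: 0.5 * j, (3, 3))
N = normal_field(H)
```

With the illumination vector L, the shading displayed for the initial model is then just the per-pixel dot product of this normal field with L.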
It will be appreciated that the image of the initial model as displayed by the computer graphics system will itself be shaded to represent the shape of the object as defined by the initial model, as illuminated from the predetermined illumination direction and as projected onto the image plane. Each pixel P(i, j) on the image plane will have an associated intensity value I(x, y) (which is also referred to herein as a "pixel value"), which represents the relative brightness of the image at the pixel P(i, j), and which, inversely, represents the relative shading of the pixel. If the initial pixel value for each pixel P(i, j) is given by I(x, y), which represents the image intensity value or brightness of the respective pixel P(i, j) at location (x, y) on the image plane 220, and the pixel value after shading is represented by Î(x, y), then the operator preferably updates the shading for the image such that, for each pixel:

|Î(x, y) − I(x, y)| < ε, for (x, y) ∈ Ω    (2.03)

where ε (ε > 0) is a predetermined bound value selected so that, if Equation (2.03) is satisfied for each pixel, the shape of the object can be updated based on the shading provided by the operator.

After the operator updates the shading for a pixel, the computer graphics system 200 will perform two general operations in generation of the updated shape for the object. In particular, the computer graphics system 200 will

(i) first determine, for each pixel P(i, j) whose shading is updated, a respective new normal vector n̂(x, y); and

(ii) after generating an updated normal vector n̂(x, y), determine a new height value ẑ(x, y).

The computer graphics system 200 will perform these operations (i) and (ii) for each pixel P(i, j) whose shading is updated, as the shading is updated, thereby to provide a new normal vector field N̂(x, y) and height field Ĥ(x, y). Operations performed by the computer graphics system 200 in connection with updating of the normal vector n̂ (item (i) above) for a pixel P(i, j)
will be described in connection with FIGS. 7 and 8, and operations performed in connection with updating of the height value z1(x, y) (item (ii) above) for the pixel will be described in connection with FIGS. 9 and 10.

With reference initially to FIG. 7, that figure depicts a portion of the object, identified by reference numeral 230, after a pixel's shading has been updated by the operator. In the following it will be assumed that the updated normal vector, identified by the arrow with legend "n1," for a point z(x, y) on the surface of the object 230, is to be determined. The normal vector identified by legend "n0" represents the normal to the surface prior to the updating. The illumination direction is represented by the line extending along the arrow identified by legend "L." "L" specifically represents an illumination vector whose direction is based on the direction of illumination from the light source illuminating the object, and whose magnitude represents the magnitude of the illumination on the object provided by the light source. In that case, based on the updating, the set of possible new normal vectors lies on the surface of the cone 231 which is defined by:

n1 · L = I1    (2.04)

that is, the set of vectors for which the dot product with the illumination vector corresponds to the pixel value I1 for the pixel after the updating of the shading as provided by the operator. In addition, since the normal vector n1 is, as is the case with all normal vectors, normalized to have a predetermined magnitude value, preferably the value "one," the updated normal vector has a magnitude corresponding to:

∥n1∥ = (n1 · n1)^(1/2) = 1    (2.05)

where ∥n1∥ refers to the magnitude of the updated normal vector n1. Equations (2.04) and (2.05) define a set of vectors, and the magnitudes of the respective vectors, one of which is the updated normal vector for the updated object at point z(x, y).
The computer graphics system 200 will select one of the vectors from the set as the appropriate updated normal vector n1 as follows. As noted above, the updated normal vector will lie on the surface of cone 231. It is apparent that, if the original normal vector n0 and the illumination vector L are not parallel, then they (that is, the prior normal vector n0 and the illumination vector L) will define a plane. This follows since the point z(x, y) at which the illumination vector L impinges on the object 230, and the origin of the normal vector n0 on object 230, is the same point, and the tail of the illumination vector and head of the prior normal vector n0 will provide the two additional points which, with the point z(x, y), suffice to define a plane. Thus, if a plane, which is identified by reference numeral 232, is constructed on which both the illumination vector L and the prior normal vector n0 lie, that plane 232 will intersect the cone 231 along two lines, which are represented by lines 233 in FIG. 7. One of the lines 233 lies on the surface of the cone 231 which is on the side of the illumination vector L towards the prior normal vector n0, and the other line 233 lies on the surface of the cone 231 which is on the side of the illumination vector L away from the prior normal vector n0, and the correct updated normal vector n1 is defined by the line on the cone 231 which is on the side of the illumination vector L towards the prior normal vector n0. Based on these observations, the direction of the updated normal vector can be determined from Equation (2.04) and the following. Since the prior normal vector n0 and the illumination vector L form a plane 232, their cross product, n0 × L, defines a vector that is normal to the plane 232.
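The construction just described can be carried out in closed form: split the updated normal into a component along L and a component along the in-plane direction from L toward n0. The following sketch is illustrative only, not the patent's implementation; the function name is invented, L is assumed to be a unit vector (so that the cone condition reads cos θ = I1), and NumPy is assumed:

```python
import numpy as np

def updated_normal(n0, L, I1):
    """Return the unit normal n1 with n1 . L = I1 that lies in the plane
    spanned by n0 and L, on the side of L toward n0.

    n0 : prior unit normal; L : unit illumination vector;
    I1 : updated pixel value, assumed to satisfy |I1| <= 1."""
    n0 = np.asarray(n0, float)
    L = np.asarray(L, float)
    # Direction m: component of n0 orthogonal to L, normalized. Together
    # with L it spans the plane containing both vectors (plane 232).
    m = n0 - np.dot(n0, L) * L
    m = m / np.linalg.norm(m)          # undefined if n0 is parallel to L
    # On the cone n1 . L = I1 with |n1| = 1, the angle theta between n1
    # and L satisfies cos(theta) = I1; the +m branch picks the
    # intersection line on n0's side of L.
    return I1 * L + np.sqrt(1.0 - I1**2) * m
```

The parallel case (n0 along L) must be handled separately, exactly as the text's construction requires the two vectors to define a plane.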
Thus, since the updated normal vector n1 also lies in the plane 232, the dot product of the updated normal vector n1 with the vector defined by the cross product between the prior normal vector n0 and the illumination vector L has the value zero, that is:

n1 · (n0 × L) = 0    (2.06)

In addition, since the difference between the pixel values I0 and I1 provided by the prior shading and the updated shading is bounded by ε (Equation (2.03) above), the angle between the prior normal vector n0 and the updated normal vector n1 is also bounded by some maximum positive value α. As a result, Equation (2.06) is supplemented by the constraint

∠(n0, n1) ≤ α    (2.07)

This is illustrated diagrammatically in FIG. 8. FIG. 8 depicts a portion of the cone 231 depicted in FIG. 7, the updated normal vector n1, and a region, identified by reference numeral 234, that represents the maximum angle α from the prior normal vector within which the updated normal vector n1 is constrained to lie.

The computer graphics system 200 (FIG. 5) will generate an updated normal vector n1 for each pixel in the image plane 220 based on the shading provided by the operator, thereby to generate an updated vector field N1(x, y). After the computer graphics system 200 has generated the updated normal vector for a pixel, it can generate a new height value z1(x, y) for that pixel, thereby to update the height field H(x, y) based on the updated shading. Operations performed by the computer graphics system 200 in connection with updating the height value z1(x, y) will be described in connection with FIGS. 9 and 10.

FIG. 9 depicts an illustrative updated shading for the image plane 220 depicted in FIG. 6. For the image plane 220 depicted in FIG. 9, the pixels have been provided with coordinates, with the rows being identified by numbers in the range from 1 through 8, inclusive, and the columns being identified by letters in the range A through I, inclusive. As shown in FIG.
9, in the updated shading, the shading of a group of the pixels has been modified, and the computer graphics system 200 is to generate an updated height value h1(x, y) for each such pixel, for use as the updated height value for the pixel in the updated height field H(x, y). To accomplish that, the computer graphics system 200 performs several operations, which will be described below, to generate a height value for each pixel whose shading has been modified along a vertical direction, a horizontal direction, and two diagonal directions, and generates the final height value for the pixel as the average of the four height values (that is, the height values along the vertical, horizontal, and two diagonal directions).

The operations performed by the computer graphics system 200 in generating an updated height value will be described in connection with one of the modified pixels in the image plane 220, along one of the directions, namely, the horizontal direction. Operations performed in connection with the other directions, and the other pixels whose shading is updated, will be apparent to those skilled in the art. In generating an updated height value, the computer graphics system 200 makes use of Bézier-Bernstein interpolation, which defines a curve P(t) of degree n as

P(t) = Σ_{i=0}^{n} Bi C(n, i) t^i (1 − t)^(n−i)    (2.08)

where t is a numerical parameter on the interval between 0 and 1, inclusive, C(n, i) is the binomial coefficient, and the vectors Bi (defined by components (bix, biy, biz)) define the n+1 control points for the curve P(t), with control points B0 and Bn comprising the endpoints of the curve. The tangents of the curve P(t) at the endpoints correspond to the vectors B0B1 and Bn−1Bn. In one embodiment, the computer graphics system 200 uses a cubic Bézier-Bernstein interpolation

P3(t) = B0(1 − t)^3 + 3 B1 t(1 − t)^2 + 3 B2 t^2 (1 − t) + B3 t^3    (2.09)

to generate the updated height value. The points B0, B1, B2, and B3 are control points for the cubic curve P3(t).
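As a concrete illustration, Equation (2.09) can be evaluated directly. The sketch below is illustrative code, not part of the patent; the function name and the use of NumPy are assumptions:

```python
import numpy as np

def cubic_bezier(B0, B1, B2, B3, t):
    """Evaluate the cubic Bezier-Bernstein curve of Equation (2.09):
    P3(t) = B0 (1-t)^3 + 3 B1 t (1-t)^2 + 3 B2 t^2 (1-t) + B3 t^3."""
    B0, B1, B2, B3 = (np.asarray(B, float) for B in (B0, B1, B2, B3))
    u = 1.0 - t
    return B0 * u**3 + 3.0 * B1 * t * u**2 + 3.0 * B2 * t**2 * u + B3 * t**3
```

At t = 0 and t = 1 the curve reproduces the endpoints B0 and B3, which is the endpoint property relied on in the derivation that follows.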
Equation (2.09), as applied to the determination of the updated height value h1 for the pixel, corresponds to

h1 = ha(1 − t)^3 + 3 B1 t(1 − t)^2 + 3 B2 t^2 (1 − t) + hb t^3    (2.10)

where ha and hb are the height values, regarded as points on the surface, at the pixels at the two ends of the horizontal span through the pixel whose height value is being determined. It will be appreciated from Equation (2.10) that, for t = 0, the updated height value h1 corresponds to ha, and, for t = 1, the updated height value h1 corresponds to hb. On the other hand, for t having a value other than 0 or 1, the updated height value h1 is a function of the height values ha and hb and the height values for control points B1 and B2.

As noted above, for an n-degree curve P(t), the tangents at the endpoints correspond to the vectors B0B1 and Bn−1Bn. Thus, for the curve P3(t) shown in FIG. 10, the vector B0B1 that is defined by endpoint B0 and adjacent control point B1 is tangent to the curve P3(t) at endpoint B0, and the vector B2B3 defined by endpoint B3 and adjacent control point B2 is tangent to the curve at endpoint B3. Accordingly, the vector B0B1 is orthogonal to the normal vector na at the first endpoint pixel, and the vector B2B3 is orthogonal to the normal vector nb at the other endpoint pixel. Thus,

0 = (B1 − B0) · na and 0 = (B2 − B3) · nb    (2.11)

which, since B0 = ha and B3 = hb, leads to

0 = (B1 − ha) · na and 0 = (B2 − hb) · nb    (2.12)

For the determination of the updated height value h1 for the horizontal direction (see FIG. 9), Equation (2.10), which is in vector form, gives rise to the following equations for each of the dimensions x and z (the z dimension being orthogonal to the image plane):

h1x = hax(1 − t)^3 + 3 b1x t(1 − t)^2 + 3 b2x t^2 (1 − t) + hbx t^3    (2.13)

and

h1z = haz(1 − t)^3 + 3 b1z t(1 − t)^2 + 3 b2z t^2 (1 − t) + hbz t^3    (2.14)

where the x and z subscripts in Equations (2.13) and (2.14) indicate the respective x and z components for the respective vectors in Equation (2.10). It will be appreciated that, for Equations (2.13) and (2.14), only the value of the z component, h1z, of the height value is unknown; the value of the x component, h1x, will be a function of the position of the pixel whose height value is being determined. In addition, Equation (2.12) gives rise to the following two equations:

0 = (b1x − hax) nax + (b1z − haz) naz    (2.15)

and

0 = (b2x − hbx) nbx + (b2z − hbz) nbz    (2.16)

where the subscripts in Equations (2.15) and (2.16) indicate the respective x and z components for the respective vectors in Equation (2.12), the y components being constant along the horizontal direction. In addition, as noted above, there is the further constraint on the curve P3(t),
in particular the constraint that the updated normal n1 be normal to the curve at the point corresponding to the pixel. If the vector B012B123 in FIG. 10 is tangent to the curve at the point corresponding to the pixel, then the point h1, whose z component corresponds to the updated height value, also lies on the vector B012B123. Thus,

0 = (B012 − h1) · n1    (2.17)

and

0 = (B123 − h1) · n1    (2.18)

Based on the convex combinations depicted in FIG. 10, in which B01 = B0 + t(B1 − B0), B12 = B1 + t(B2 − B1) and B23 = B2 + t(B3 − B2),

B012 = B01 + t(B12 − B01) = B01(1 − t) + B12 t    (2.19)

and

B123 = B12 + t(B23 − B12) = B12(1 − t) + B23 t    (2.20)

which lead to

B012 = B0 + t(B1 − B0) + t[B1 + t(B2 − B1) − B0 − t(B1 − B0)]    (2.21)

and

B123 = B1 + t(B2 − B1) + t[B2 + t(B3 − B2) − B1 − t(B2 − B1)]    (2.22)

Combining Equations (2.17), (2.19) and (2.21),

0 = (B01(1 − t) + B12 t − h1) · n1 = (B0(1 − t)^2 + 2 B1 t(1 − t) + B2 t^2 − h1) · n1    (2.23)

which leads to

0 = (b0x(1 − t)^2 + 2 b1x t(1 − t) + b2x t^2 − h1x) n1x and 0 = (b0z(1 − t)^2 + 2 b1z t(1 − t) + b2z t^2 − h1z) n1z    (2.24)

for the x and z components of the respective vectors. Similarly, for Equations (2.18), (2.20) and (2.22),

0 = (b1x(1 − t)^2 + 2 b2x t(1 − t) + b3x t^2 − h1x) n1x and 0 = (b1z(1 − t)^2 + 2 b2z t(1 − t) + b3z t^2 − h1z) n1z    (2.25)

for the x and z components of the respective vectors.

It will be appreciated that the eight Equations (2.13) through (2.16), (2.24) and (2.25) are all one-dimensional in the respective x and z components. For the Equations (2.13) through (2.16), (2.24) and (2.25), there are six unknown values, namely, the value of parameter t, the values of the x and z components of the vector B1 (that is, values b1x and b1z), the x and z components of the vector B2 (that is, values b2x and b2z), and the z component of the vector h1 (that is, value h1z) for the point P3(t) for the pixel. The eight Equations (2.13) through (2.16), (2.24) and (2.25) form a system of equations which will suffice to allow the values for the unknowns to be determined by methodologies which will be apparent to those skilled in the art.

The computer graphics system 200 will, in addition to performing the operations described above in connection with the horizontal direction (corresponding to the x coordinate axis), also perform corresponding operations for each of the vertical and two diagonal directions to determine the updated height vector h1 for the pixel. After the computer graphics system 200 determines the updated height vectors for all four directions, it will average them together. The z component of the average of the updated height vectors corresponds to the height value for the updated model for the object.

The operations performed by the computer graphics system 200 will be described in connection with the flowchart in FIGS. 11A and 11B. Generally, it is anticipated that the operator will have a mental image of the object that is to be modeled by the computer graphics system. With reference to FIGS.
11A and 11B, the initial model for the object is determined (step 250), and the computer graphics system displays a two-dimensional image thereof to the operator based on a predetermined illumination direction, with the display direction corresponding to an image plane (reference image plane 220 depicted in FIG. 6) (step 251). As noted above, the initial model may define a predetermined default shape, such as a hemisphere or ellipsoid, provided by the computer graphics system, or alternatively a shape as provided by the operator. In any case, the shape will define an initial normal vector field N(x, y) and height field H(x, y), defining a normal vector and height value for each pixel in the image. After the computer graphics system 200 has displayed the initial model, the operator can select one of a plurality of operating modes, including a shading mode in connection with the invention, as well as one of a plurality of conventional computer graphics modes, such as erasure and trimming (step 252). If the operator selects the shading mode, the operator will update the shading of the two-dimensional image by means of, for example, the system's pen and digitizing tablet (step 253). While the operator is applying shading to the image in step 253, the computer graphics system 200 can display the shading to the operator. The shading that is applied by the operator will preferably be a representation of the shading of the finished object as it would appear illuminated from the predetermined illumination direction, and as projected onto the image plane as displayed by the computer graphics system 200. When the operator has updated the shading for a pixel in step 253, the computer graphics system 200 will generate an update to the model of the object.
In generating the updated model, the computer graphics system 200 will first determine, for each pixel in the image, an updated normal vector, as described above in connection with FIGS. 7 and 8, thereby to provide an updated normal vector field for the object (step 254). Thereafter, the computer graphics system 200 will determine, for each pixel in the image, an updated height value, as described above in connection with FIGS. 9 and 10, thereby to provide an updated height field for the object (step 255). After generating the updated normal vector field and updated height field, thereby to provide an updated model of the object, the computer graphics system 200 will display an image of the updated model to the operator from one or more directions and zooms as selected by the operator (step 256), in the process rotating, translating, scaling and/or zooming the image as selected by the operator (step 257). If the operator determines that the updated model is satisfactory (step 258), which may occur if, for example, the updated model corresponds to his or her mental image of the object to be modeled, he or she can enable the computer graphics system 200 to save the updated model as the final model of the object (step 259). On the other hand, if the operator determines in step 258 that the updated model is not satisfactory, he or she can enable the computer graphics system 200 to return to step 251.

Returning to step 252, if the operator in that step selects another operating mode, such as the erasure mode or a conventional operational mode such as the trimming mode, the computer graphics system will sequence to step 260 to update the model based on the erasure information, or the trimming and other conventional computer graphic information, provided to the computer graphics system 200 by the operator.
The computer graphics system will sequence to step 257 to display an image of the object based on the updated model. If the operator determines that the updated model is satisfactory (step 258), he or she can enable the computer graphics system 200 to save the updated model as the final model of the object (step 259). On the other hand, if the operator determines in step 258 that the updated model is not satisfactory, he or she can enable the computer graphics system 200 to return to step 251.

The operator can enable the computer graphics system 200 to perform steps 251, 253 through 257 and 260 as the operator updates the shading of the image of the object (step 253) or provides other computer graphic information (step 260), and the computer graphics system 200 will generate, in steps 254 and 255, the updated normal vector field and updated height field, or, in step 260, conventional computer graphic components, thereby to define the updated model of the object. When the operator determines in step 258 that the updated model corresponds to his or her mental image of the object, or is otherwise satisfactory, he or she can enable the computer graphics system 200 to store the updated normal vector field and the updated height field to define the final model for the object (step 259).

The invention provides a number of advantages. In particular, it provides an interactive computer graphics system which allows an operator, such as an artist, to imagine the desired shape of an object and how the shading on the object might appear with the object being illuminated from a particular illumination direction and as viewed from a particular viewing direction (as defined by the location of the image plane). After the operator has provided some shading input corresponding to the desired shape, the computer graphics system displays a model of the object, as updated based on the shading, to the operator.
The operator can accept the model as the final object, or alternatively can update the shading further, from which the computer graphics system will further update the model of the object. The computer graphics system constructed in accordance with the invention avoids the necessity of solving partial differential equations, which is required in prior art systems which operate in accordance with the shape-from-shading methodology.

A further advantage of the invention is that it readily facilitates the use of a hierarchical representation for the model of the object that is generated. Thus, if, for example, the operator enables the computer graphics system 200 to increase the scale of the object or zoom in on the object, thereby to provide a higher resolution, it will be appreciated that a plurality of pixels of the image will display a portion of the image which, at the lower resolution, were associated with a single pixel. In that case, if the operator updates the shading of the image at the higher resolution, the computer graphics system will generate the normal vector and height value for each pixel at the higher resolution for which the shading is updated, as described above, thereby to generate and/or update the portion of the model associated with the updated shading at the increased resolution. The updated portion of the model at the higher resolution will be associated with the particular portion of the model which was previously defined at the lower resolution, thereby to provide the hierarchical representation, which may be stored. Thus, the object as defined by the model inherits a level of detail which corresponds to a higher resolution in the underlying surface representation. Corresponding operations can be performed if the operator enables the computer graphics system 200 to decrease the scale of the object or zoom out from the object, thereby providing a lower resolution.
It will be appreciated that a number of variations and modifications may be made to the computer graphics system 200 as described above in connection with FIGS. 5-11. For example, the computer graphics system 200 can retain the object model information, that is, the normal vector field information and height field information, for a number of updates of the shading as provided by the operator, which it (that is, system 200) may use in displaying models of the object for the respective updates. This can allow the operator to view images of the respective models to, for example, enable him or her to see the evolution of the object through the respective updates. In addition, this can allow the operator to return to a model from a prior update as the base which is to be updated. This will allow the operator, for example, to generate a tree of objects based on different shadings at particular models.

In addition, although the computer graphics system 200 has been described as making use of Bézier-Bernstein interpolation to determine the updated height field h1(x, y), it will be appreciated that other forms of interpolation, such as Taylor polynomials and B-splines, may be used. In addition, multiple forms of surface representations may be used with the invention. Indeed, since the model generation methodology used by the computer graphics system 200 is of general applicability, all free-form surface representations as well as piecewise linear surfaces consisting of, for example, triangles, quadrilaterals and/or pentagons can be used.

Furthermore, although the computer graphics system 200 has been described as making use of an orthogonal projection and a single light source, it will be appreciated that other forms of projection, including perspective projection, and multiple light sources can be used.
In addition, although the computer graphics system 200 has been described as providing the shape of an object by shading of an image of the object, it will be appreciated that it may also provide computer graphics operations, such as trimming and erasure, through appropriate operational modes of the pen 204 and digitizing tablet.

Furthermore, although the computer graphics system has been described as generating a model of an object on the assumption that the object's surface is Lambertian, it will be appreciated that other surface treatments may be used for the object when an image of the object is rendered.

It will be appreciated that a system in accordance with the invention can be constructed in whole or in part from special purpose hardware or a general purpose computer system, or any combination thereof, any portion of which may be controlled by a suitable program. Any program may in whole or in part comprise part of or be stored on the system in a conventional manner, or it may in whole or in part be provided to the system over a network or other mechanism for transferring information in a conventional manner. In addition, it will be appreciated that the system may be operated and/or otherwise controlled by means of information provided by an operator using operator input elements (not shown) which may be connected directly to the system or which may transfer the information to the system over a network or other mechanism for transferring information in a conventional manner.

III. Improvements to the SBS Modeler

3.1 Introduction

The above-described systems and techniques have undergone significant development. Section 3.2 sets forth a short summary of the SBS shading and shaping process. Sections 3.3 through 3.7 describe specific extensions and other improvements to the SBS shading and shaping process.

3.2 The SBS Shading and Shaping Process

FIG.
12 shows a flow diagram illustrating the SBS shading and shaping cycle 300.

Step 301: Hierarchical subdivision surfaces, polygon meshes and Non-Uniform Rational B-Spline (NURBS) surfaces are supported as input surfaces to Shape-by-Shading (SBS). The internal algorithms of SBS use properties of hierarchical subdivision surfaces, so each of the latter two types is converted to a subdivision surface before shading begins.

Step 302: Once a subdivision surface is in place and displayed to the user, it is matched to a 2D model view, including information about grid corners, grid width and height, pixel size and camera-to-object transformation.

Steps 303-305: Using the 2D model view, the user sets a lighting direction, tunes input parameters, and shades, i.e., modifies the intensities of selected pixels, or loads a set of pre-shaded pixels. This information is passed to the shaping algorithm 306.

Steps 306-309: The shaping algorithm 306 determines the correct geometric alterations to make to the surface. More surface primitives are added where needed via subdivision in the area of the shading in order to ensure that sufficient detail is present (step 307). A height field is found that reflects in 3D the changes that were requested in the 2D setting (step 308), and the subdivision surface is then altered so that it reflects these heights (step 309).

The result is a shaped hierarchical subdivision surface that can be altered further (steps 302-309), saved (step 310), or converted to the desired output surface type.

3.3 Surface Handling Improvements

The SBS systems and techniques described in Section II above are designed for processing NURBS surfaces. The presently described systems and techniques extend SBS to accept any hierarchical subdivision surface, polygon mesh, or NURBS surface.
The incoming mesh is converted to a hierarchical subdivision surface if it is not already one, and the resulting subdivision surface is the one on which the SBS shading and shaping cycle is performed. Adaptive subdivision is used to add detail to the surface, and analysis and synthesis are used to propagate changes to all levels of the surface, allowing for modifications at specified levels of detail. The Hierarchical Subdivision Surface (HSDS) library of mental images® provides all subdivision support needed. Features of the HSDS library are set forth in patents owned by the owner of the present patent application. The subdivision surface model that results from the SBS process can be converted to another surface type if desired. In this way, SBS allows both for flexibility in choosing incoming and outgoing mesh types and takes advantage of hierarchical subdivision properties in its algorithms.

3.4 Additional Shaping Algorithm

The surface of interest, which is assumed to be continuous, is projected orthographically onto the viewing plane. This projection has an associated height field, whose intensities are determined by one light source with a Lambertian reflectance map, so that the discrete intensity I at the point (u, v) in the model view of a given surface with height field H is defined by:

I(u, v) = (N · L) / ∥N∥    (3.01)

where N is a discrete normal to the surface and L is a unit vector that points in the direction of the light source, which is infinitely far away.

The intensities of selected pixels on the projected surface are changed by means of shading or loading a pre-defined set of pixels. The SBS shaping algorithm finds a shape determined by the shading. One solution method involves Bézier-Bernstein polynomials. A new technique for interpreting 2D shading in the model view as 3D shape on the surface is now implemented in SBS and is described in the remainder of this section.
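One way to realize Equation (3.01) discretely is to take N = (−∂H/∂u, −∂H/∂v, 1) by finite differences and normalize it before dotting with the light direction. The sketch below is an illustrative discretization, not the SBS implementation; the function name, the central-difference scheme, and the use of NumPy are assumptions:

```python
import numpy as np

def intensity(H, light):
    """Discrete Lambertian intensity I(u, v) = (N . L) / |N| of
    Equation (3.01) for a height field H, with N the (unnormalized)
    discrete normal (-dH/du, -dH/dv, 1) from finite differences and
    `light` a unit vector pointing toward the (infinitely far) light."""
    dHdu, dHdv = np.gradient(H.astype(float))
    N = np.stack([-dHdu, -dHdv, np.ones_like(H, dtype=float)], axis=-1)
    N /= np.linalg.norm(N, axis=-1, keepdims=True)
    return N @ np.asarray(light, float)
```

A flat height field lit from straight overhead yields intensity 1 everywhere, and a tilted plane yields the cosine of its slope angle, as the Lambertian model requires.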
The technique described herein produces a set of height increments F over the model view that minimizes the following:

Σ [ {I_{H+F}(u, v) − I*(u, v)}^2 + λ C_{H+F}(u, v)^2 ]    (3.02)

where I* denotes the requested (shaded) intensity at (u, v), C_H(u, v) denotes the curvature of a surface with associated height field H, λ is a constant called the smoothing coefficient, and the sum is performed over pixels in the model view that intersect the interior of the projected surface.

Let P be the set of pixels in the model view whose intensities have been modified. This set is called the set of modified pixels. It is possible, through a series of simplifications, to reduce the pixels over which Function 3.02 is summed to

Q = nbhd(P) ∩ S    (3.021)

where nbhd(P) denotes a neighborhood of P and S denotes the set of pixels that intersect the interior of the projected surface. In addition, the set of height increments F can be reduced to a vector x containing one entry for each pixel or connected area whose corresponding height value may be altered by the algorithm. The size of F matches that of the height field of the projected surface; the vector x reduces the size based on the number of unique non-zero height increments in F, a potentially much smaller set. Function 3.02 can then be reduced to the unconstrained minimization of

f(x) = Σ_{q ∈ Q} [ {I_{H+x}(q) − I*(q)}^2 + λ C_{H+x}(q)^2 ]    (3.03)

The reduced function is less computationally intensive to minimize, both because it is only necessary to sum over a neighborhood of the modified pixels instead of all pixels in the model view and because the dimension of the minimization problem is reduced from the size of F to the length of x.

The method used to minimize Function 3.03 is the Trust-Region method. First, the function is modeled by the quadratic function

F(p) = f(x) + ∇f(x)^T p + (1/2) p^T ∇²f(x) p    (3.04)

where ∇f is the gradient vector of f and ∇²f is the Hessian matrix of f. Then, the model F is minimized over steps p in the region ∥p∥ ≤ Δ, for some Δ > 0. The method used to do the minimization is the CG-Steihaug method with a special sparse matrix multiplication.
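The trust-region scheme of Equation (3.04), together with the ρ-based radius update described in the next paragraph, can be illustrated on a toy problem. The sketch below is a deliberate simplification, not the SBS implementation: it substitutes the simple Cauchy-point step for CG-Steihaug and a generic smooth function for Function 3.03, and all names and constants are invented:

```python
import numpy as np

def trust_region_minimize(f, grad, hess, x, radius=1.0, tol=1e-8, max_iter=200):
    """Trust-region minimization: model f near x by the quadratic F of
    Equation (3.04), minimize F within ||p|| <= radius (here via the
    simple Cauchy point, standing in for CG-Steihaug), then grow or
    shrink the region according to the test value rho comparing the
    actual decrease of f with the decrease predicted by F."""
    x = np.asarray(x, float)
    for _ in range(max_iter):
        g, B = grad(x), hess(x)
        if np.linalg.norm(g) < tol:        # gradient small: local minimum reached
            break
        # Cauchy point: minimizer of F along -g, clipped to the region.
        alpha = radius / np.linalg.norm(g)
        gBg = g @ B @ g
        if gBg > 0:
            alpha = min(alpha, (g @ g) / gBg)
        p = -alpha * g
        predicted = -(g @ p + 0.5 * p @ B @ p)   # decrease promised by F
        rho = (f(x) - f(x + p)) / predicted      # the test value rho
        if rho > 0.75:
            radius *= 2.0                        # good model: enlarge region
        elif rho < 0.25:
            radius *= 0.25                       # poor model: shrink region
        if rho > 0:                              # accept the step only if f decreased
            x = x + p
    return x
```

On a convex quadratic the model F is exact, so ρ stays near 1, the region keeps growing, and the iteration reduces to steepest descent with an exact line search.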
A test value ρ is built from the resulting minimum point x + p of the model, comparing the actual reduction in f with the reduction predicted by F:

ρ = (f(x) − f(x + p)) / (F(0) − F(p))

If ρ is close to 1, then F is considered a good model for f within the trust region. The center of the trust region is moved to x + p, and the trust region radius is increased. If ρ is far away from 1, then the radius of the trust region is decreased. The process is repeated, i.e., a minimum for F in the new trust region is found. The criterion to stop the process is that ∇f is sufficiently small at the center of the current trust region, i.e., a local minimum of f has been attained. It can be proven that this method converges.

3.5 The SBS C++ API

A C++ applications programming interface (API) has been developed recently for SBS. It works above the mental matter® library and requires the mental matter library at linking time. The mental matter library is initialized and terminated internally within the SBS library. There are three main SBS classes:

miSbs_surface
miSbs_ogl_view
miSbs_solver

Access and creation methods for these are provided by miSbs_module. Initialization of miSbs_module is implicit and is done when first accessing the class. An instance of miSbs_module is returned by its static method get. The terminate method must be called when unloading the library. SBS uses objects from the miCapi_cd_subsurf class of the mental matter library, which are wrapped into a miSbs_surface object via the create_surface method. A plugin writer may provide the SBS API with an instance of miCapi_cd_subsurf or with a tessellated mesh in the form of a miGeoBox. Another possibility is not to provide a surface, in which case the library creates a wrapper holding an empty subdivision surface. Other methods in the miSbs_module class include create_viewer and get_solver.
The miSbs_surface class is used in calls to miSbs_solver, to access the SBS shaping algorithm, and in miSbs_ogl_view, for display and interaction. miSbs_surface implementations are instantiated using the create_surface method of miSbs_module, and are destroyed with the destroy method. Other methods in the miSbs_surface class include get_subsurf, get_depth and convert (which converts mesh indices to and from subsurf indices). Other related classes are provided to speed up integration. For instance, miSbs_max_mesh hides the technical details of converting 3ds max meshes to and from the miSbs_surface object.

The miSbs_ogl_view class may be used to display an instance of miSbs_surface. miSbs_ogl_view implementations are instantiated using the create_viewer method of miSbs_module and are destroyed with the destroy method. The miSbs_ogl_view entity does not maintain any reference to a given miSbs_surface instance. It only carries data related to its mesh representation (possibly simplified), and auxiliary graphic data such as OpenGL contexts and triangle strip buffers. Methods of the miSbs_ogl_view class include set_settings and update, as well as a set of methods around the 2D projection of mesh vertices and faces. These include get_face_in_pixel, get_vtxs_in_face, get_vtx_pixels, get_2d_distance_to_vtx, get_face_color, set_face_color, reset_face_colors, set_pixel_buffer, get_pixels_no, and copy_pixel_buffer. An additional helper class, miSbs_win32_viewport, is provided to be hooked up on an existing window. It provides basic refresh and message handling capabilities.

A third class, called miSbs_solver, is exposed in the API to perform SBS operations. It is implemented as a static, stateless instance and is accessed via the get_solver method of miSbs_module.
Its methods include get_default_settings, update (the main algorithm that determines 3D shape from 2D shading), and cancel.

3.6 A Prototype: SBS Plugin for 3ds max

SBS has been implemented recently as a modifier for 3ds max from Autodesk/Discreet. The modifier allows an artist to perform SBS modeling by shading in 2D with a simple brush directly on the 3ds max viewports. All the functionality of the SBS modeling library is available within the plugin. FIG. 13 shows a screenshot 350 of the SBS Modifier in 3ds max.

The plugin features include:

Shading tool with a basic 2D paint package and the ability to load and save shadings;
Light controls;
Parameter tuning;
Update of surface shape based on shading information, light direction and input parameters;
Undo/redo that is internal to the modifier;
Tool for selecting the area to be updated (masking); and
Selection tool with standard subdivision surface manipulations.

3.7 Extensions to SBS

It is contemplated that the above-described systems and techniques may be enhanced in a number of ways. For example, these systems and techniques may be modified to include the following:

Mesh-Based SBS: The ability to run the SBS process on polygon meshes, without first converting to and/or using properties of subdivision surfaces.
Real-Time Mesh Display: The ability to display large, complex meshes at an interactive rate. Other modifications may include view-dependent simplification and custom mesh operations.
Trimming: The ability to trim arbitrarily across surface faces.
Creasing: The ability to crease arbitrarily across surface faces.
Shape from Contour Lines: The ability to sketch contour lines to produce an initial 3D shape, as a tool to complete the SBS modeling process.
NURBS-Based SBS: The ability to run the SBS process on Non-Uniform Rational B-Splines (NURBS).

IV.
Shaping Methods and Algorithms Implemented in the SBS Modeler

4.1 Introduction

The goal of the Shape-By-Shading (SBS) process is to interpret two-dimensional (2D) shading as a three-dimensional (3D) shape. There are described herein a number of techniques and systems for accomplishing this. It is assumed, for the purposes of the present description, that a grayscale shading has been done on the projection of a surface onto a viewing plane, that the underlying surface is continuous, and that the grayscale intensities of the projected surface can be described by a mathematical scheme.

Let H denote the height values in camera space associated with the part of a surface visible in the viewing window, let Ĩ denote the intensity of the surface given lighting condition ℓ, and let C denote the curvature of the surface. The SBS shading algorithm tries to find increments Γ to be added to the height values H that minimize the following function:

∫ (I_{H+Γ}(u, v) − Ĩ(u, v))² + λ C²_{H+Γ}(u, v) du dv    (4.01)

where λ is a positive constant called the smoothing coefficient and the integration is performed over the area of the projected surface in the viewing window. The ideal result is a new set of surface heights with smoothness determined by λ whose intensities match those of the shading. The 2D shading thus results in a 3D modification. The SBS process must be done in an efficient way and one that does not disturb the continuity of the underlying surface. In order to move from a theoretical setting to a computational one it is necessary to discretize, which is discussed in the next section.

4.2 Rasterization

In order to find a solution for SBS, the problem must be adequately described and rasterized for discrete calculation.

4.2.1 Model View

The viewing window is rasterized as a rectangular grid of pixels, M, that have integer-valued coordinates (u, v) and corresponding camera space coordinates (x, y). This rasterization is called the "model view."
A neighborhood of raster pixel (u₀, v₀) ∈ M is defined to be all pixels (u, v) ∈ M such that |u − u₀| ≤ 1 and |v − v₀| ≤ 1, including the pixel itself. The neighborhood of a set of pixels A is denoted by nbhd(A). A pixel is defined to be on the boundary of set A if the pixel is a member of A but one or more of its neighbors is not in A. The boundary of A is denoted ∂A. The interior of A is defined to be A° = A \ ∂A, where the symbol "\" denotes set subtraction. Pixels in the interior of the model view have a neighborhood containing 9 pixels.

4.2.2 Projection of the Surface

The SBS algorithm assumes orthographic projection of the surface onto the viewing plane. S denotes the set of pixels in the model view that intersect the projected surface. The height field of the surface, denoted by H, is defined over the pixels in S and contains the heights of the surface in camera space. H(u, v) is the floating-point height of the part of the surface visible at pixel (u, v) in the model view.

4.2.3 First Derivatives

It is assumed that raster points in the model view are spaced so that the vertical and horizontal distances between them are equal, i.e., that the model view pixels are square and are of uniform size. However, it should be noted that it would also be possible for the presently described systems and techniques to be implemented with respect to non-square pixels. Non-square pixels can be used, for example, in a mesh-based implementation of SBS. In that case, projected primitive vertices can be used to define the pixels.

Let ε be the floating-point width (height), in camera space, between neighboring raster points. The discrete first derivative in the u-direction of a surface with height field H is based on a simple slope formula across two pixels that straddle the point of interest and is defined to be

D_u H(u, v) = (H(u+1, v) − H(u−1, v)) / 2ε
(4.02)

Similarly, the discrete first derivative in the v-direction is defined to be

D_v H(u, v) = (H(u, v+1) − H(u, v−1)) / 2ε.    (4.03)

4.2.4 Surface Normal

A discrete normal to a surface with height field H is defined to be

N_H(u, v) = ⟨−D_u H(u, v), −D_v H(u, v), 1⟩    (4.04)

which is the cross product of the surface tangent vectors ⟨1, 0, D_u H(u, v)⟩ and ⟨0, 1, D_v H(u, v)⟩.

4.2.5 Lighting Conditions and Intensity

In SBS it is assumed that the lighting condition is Lambertian with one light source. However, it would also be possible to use another lighting model, and/or to use multiple light sources. The light source is described by a unit vector ℓ that points in the direction of the light, which is infinitely far away. The discrete intensity I of a given surface with height field H and light vector ℓ is defined by

I_H(u, v) = (N_H(u, v) · ℓ) / ‖N_H(u, v)‖.    (4.05)

This formula produces scalar values between 0.0 (black) and 1.0 (white), and implies that areas of the surface that face toward the light source are lighter than those parts that face away.

4.2.6 Second Derivatives

The discrete second derivatives of a surface with height field H are based on using the alternative formulas

(H(u+1, v) − H(u, v)) / ε    (4.051)

and

(H(u, v+1) − H(u, v)) / ε    (4.052)

for the discrete first derivatives in the u- and v-directions, respectively. The discrete second derivative in the u-direction is defined to be

D²_{uu} H(u, v) = (H(u+1, v) + H(u−1, v) − 2H(u, v)) / ε².    (4.06)

Similarly,

D²_{vv} H(u, v) = (H(u, v+1) + H(u, v−1) − 2H(u, v)) / ε²    (4.07)

and

D²_{uv} H(u, v) = (H(u+1, v+1) + H(u−1, v−1) + 2H(u, v) − J_H(u, v)) / 2ε²    (4.08)

where
J_H(u, v) = H(u+1, v) + H(u−1, v) + H(u, v+1) + H(u, v−1).    (4.081)

4.2.7 Curvature

The discrete curvature of a surface with height field H is defined to be the nonnegative scalar

C_H(u, v) = ‖⟨D²_{uu} H(u, v), D²_{vv} H(u, v), D²_{uv} H(u, v)⟩‖ = √((D²_{uu} H(u, v))² + (D²_{vv} H(u, v))² + (D²_{uv} H(u, v))²).    (4.09)

4.2.8 Discrete Function to Be Minimized

A discretized version of Function 4.01 can be made by using the definitions of discrete intensity and curvature. Since those formulas depend on neighboring pixel values, only pixels that are both in the interior of the model view and in the interior of the projected surface will be considered as being contained in the area of interest. The discrete version of Function 4.01 is defined to be

f(Γ) = Σ_{(u,v) ∈ S° ∩ M°} (I_{H+Γ}(u, v) − Ĩ(u, v))² + λ C²_{H+Γ}(u, v)    (4.10)

where λ is a positive constant called the smoothing coefficient and Γ is a set of height increments that is defined on the projected surface S. The condition Γ(u, v) = 0.0 for (u, v) ∈ ∂S ∪ ∂M is imposed in order to avoid artifacts on the surface at the boundary of its projection onto the model view or at the edge of the model view.

4.3 Reduction of the Function to Be Minimized

In this section it is shown that Function 4.10 need only involve a sum over pixels in a neighborhood of those pixels whose intensities were modified by the user, rather than all pixels in S° ∩ M°. In addition, it is shown that the dimension of the function can be reduced by representing the proposed height increments in a more compact way.

4.3.1 Modified Pixels

Let

P = {p₀, …, p_{m−1}}, where p_i = (u_i, v_i) ∈ M for each i,    (4.101)

be the set of pixels in the model view whose intensities the user has modified. P is called the set of "modified pixels," and it is assumed that

P ⊂ S° ∩ M°    (4.102)

since neighboring information is needed to calculate intensity.

4.3.2 View Code

The view code aids in the development of reductions in calculation. It categorizes pixels by proposed height increment.
The notions of path and connected set are needed to build it. A path from pixel (u₀, v₀) to pixel (u_j, v_j) is defined to be any sequence of pixels

(u₀, v₀), (u₁, v₁), …, (u_j, v_j)    (4.103)

such that

(u_i, v_i) ∈ nbhd({(u_{i−1}, v_{i−1})}) for 1 ≤ i ≤ j.    (4.104)

Set A is a connected component of set B if A ⊂ B and for each (u_a, v_a), (u_b, v_b) ∈ A there is a path from (u_a, v_a) to (u_b, v_b) completely contained in A. The view code assumes that the height increments Γ are constant on connected components of S \ P. In order to avoid surface artifacts, it is assumed that this constant is 0.0 if the connected component intersects ∂S ∪ ∂M.

Let T = (T_{−1}, T₀, …, T_{n−1}) be a partition of S \ P such that T_{−1} is the union of all connected components that intersect ∂S ∪ ∂M and T_i, for 0 ≤ i < n, is a connected component that is maximal in the sense that there is no connected component of S \ P that properly contains it. Then the set Γ of increments has at most m + n unique non-zero values, one for each of the modified pixels and one for each T_i with i ≠ −1. The "view code" is defined to be

V(u, v) = −1 if (u, v) ∈ T_{−1};  V(u, v) = i if (u, v) = p_i ∈ P;  V(u, v) = m + i if (u, v) ∈ T_i.    (4.11)

This means, for instance, that if 0 ≤ V(u₀, v₀) < m then it is known that (u₀, v₀) is a modified pixel. Also, if V(u₀, v₀) = −1 then pixel (u₀, v₀) will not be altered as a result of the 2D shading information. This is also true of pixels for which no view code is assigned, i.e., pixels not on the projection of the surface onto the model view.

4.3.3 Function Pixels and a First Reduction in Calculation

Moving the heights of a section of the surface by a constant amount does not change the intensity or curvature of the interior of that section. Since the pixels in each T_i are moved by a constant amount and T is a partition of S \ P, the only possible pixels at which

I_{H+Γ}(u, v) ≠ I_H(u, v)    (4.111)

are contained in nbhd(P) ∩ S°.
Let

Q = nbhd(P) ∩ S° ∩ M° = {q₀, q₁, …, q_{t−1}}    (4.12)

where q_i denotes a pixel (u_i, v_i) in the model view. Q is called the set of "function pixels," and Function 4.10 can now be reduced to

f(Γ) = Σ_{i=0}^{t−1} ((I_{H+Γ}(q_i) − Ĩ(q_i))² + λ C²_{H+Γ}(q_i)) + K    (4.13)

where K, which is the sum of

λ C²_{H+Γ}(u, v)    (4.131)

over the set

(S° ∩ M°) \ Q,    (4.132)

is a constant.

4.3.4 Increment Vector and a Second Reduction in Calculation

Since there are at most m + n unique non-zero height increments in Γ, the function to be minimized need only be a function of those increments, as identified by the view code. Let

x = ⟨x₀, x₁, …, x_{m+n−1}⟩    (4.14)

where x_i is the increment value of Γ for all pixels such that V(u, v) = i. x is called the "increment vector." For

(u, v) ∈ S \ T_{−1},    (4.141)

let

H_x(u, v) = H(u, v) + x_{V(u,v)}.    (4.15)

Function 4.10 can then be reduced to a minimization of

f(x) = Σ (I_{H_x}(u, v) − Ĩ(u, v))² + λ C²_{H_x}(u, v).    (4.16)

4.3.5 Objective Function

Functions 4.13 and 4.16 can be combined to form the following function, which when minimized is equivalent to the minimization of Function 4.10:

f(x) = Σ_{i=0}^{t−1} (I_x(q_i) − Ĩ(q_i))² + λ C²_x(q_i)    (4.17)

where I_x and C_x denote the intensity and curvature of the surface with height field H_x. The minimization can now also be unconstrained. Function 4.17 is the final version of the discrete function to be minimized by the shaping algorithm. It is called the "objective function" and is of dimension m + n.

4.4 The Trust-Region Newton-CG Method

4.4.1 Overview

SBS uses the Trust-Region Newton-CG method to minimize the objective function (Function 4.17). The main ideas of Trust-Region methods are to set up a quadratic model for the function, to minimize the model function within a "trust region," to adjust the trust region according to certain criteria, to minimize the model function within the new trust region, to adjust the region again, to minimize again, etc. Given certain restrictions, such a method is guaranteed to converge to a point corresponding to a local minimum of the original function.
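The iteration just outlined can be made concrete with a small illustrative sketch. This is not the SBS implementation: the helper names are hypothetical, the objective is a toy quadratic, and, for brevity, the model is minimized along the steepest-descent (Cauchy) direction rather than with the CG-Steihaug method described below.

```python
def trust_region_minimize(f, grad, hess_mul, x, radius=1.0, tol=1e-8, max_iter=200):
    """Trust-region outer loop: model, minimize in region, test, adjust."""
    for _ in range(max_iter):
        g = grad(x)
        gnorm = sum(gi * gi for gi in g) ** 0.5
        if gnorm < tol:                     # stopping criterion: small gradient
            break
        Hg = hess_mul(x, g)
        gHg = sum(a * b for a, b in zip(g, Hg))
        # Cauchy step along -g, clipped to the trust-region radius.
        tau = 1.0 if gHg <= 0 else min(gnorm ** 3 / (radius * gHg), 1.0)
        p = [-tau * radius / gnorm * gi for gi in g]
        x_new = [a + b for a, b in zip(x, p)]
        # Test value rho: actual reduction over model-predicted reduction.
        pred = -(sum(a * b for a, b in zip(g, p))
                 + 0.5 * sum(a * b for a, b in zip(p, hess_mul(x, p))))
        rho = (f(x) - f(x_new)) / pred if pred > 0 else 0.0
        if rho > 0.1:
            x = x_new          # accept: move the trust-region center
        if rho > 0.75:
            radius *= 2.0      # good model: grow the region
        elif rho < 0.25:
            radius *= 0.25     # poor model: shrink the region
    return x

# Toy objective f(x) = |x - a|^2 with minimizer a = (1, -2).
a = (1.0, -2.0)
f = lambda x: sum((xi - ai) ** 2 for xi, ai in zip(x, a))
grad = lambda x: [2.0 * (xi - ai) for xi, ai in zip(x, a)]
hess_mul = lambda x, v: [2.0 * vi for vi in v]
xmin = trust_region_minimize(f, grad, hess_mul, [0.0, 0.0])
```

Because the toy objective is quadratic, the model is exact, every step is accepted with ρ = 1, and the loop converges to the minimizer in a few iterations.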
The Trust-Region model of the objective function centered at an increment vector

x^i = ⟨x^i₀, …, x^i_{m+n−1}⟩    (4.171)

is

F_i(x) = f(x^i) + ∇f(x^i)ᵀ x + ½ xᵀ ∇²f(x^i) x    (4.18)

where ∇f is the gradient vector

∇f(x) = ⟨∂f(x)/∂x₀, …, ∂f(x)/∂x_{m+n−1}⟩    (4.19)

and ∇²f is the Hessian matrix, whose (j, k) entry is ∂²f(x)/∂x_j ∂x_k.    (4.20)

The "Newton" in the Trust-Region Newton-CG name comes from the fact that the Hessian matrix is used in the model. Some other, usually positive definite, matrix may be used instead, in which case "Newton" is dropped from the name. In the case of SBS, use of the Hessian matrix is convenient and allows for weakened convergence conditions to be used.

If ‖∇f(x⁰)‖ is sufficiently small, i.e., less than some convergence threshold, then it is concluded that a local minimum of f occurs at x⁰ and the process is finished. Otherwise, the model F₀ is minimized in a circular trust region ‖x‖ ≤ Δ₀, for some positive Δ₀, via the CG-Steihaug method, described below. Let p⁰ be the resulting increment vector at which a minimum of the model in the trust region occurs. If ‖∇f(x⁰ + p⁰)‖ is less than the convergence threshold, then it is concluded that a local minimum of f occurs at x⁰ + p⁰ and the process is finished. Otherwise, calculate the test value

ρ₀ = (f(x⁰) − f(x⁰ + p⁰)) / (F₀(0) − F₀(p⁰)).    (4.21)

If ρ₀ passes a threshold, then the actual reduction and predicted reduction are somewhat close to one another and the center of the trust region is moved to x¹ = x⁰ + p⁰. Otherwise, the center of the trust region stays at x¹ = x⁰. If ρ₀ is close to 1, then F₀ is considered a good model for f within the trust region and the radius is increased. If ρ₀ is far away from 1, then the radius of the trust region is decreased. The new region radius is labeled Δ₁, a new model F₁ centered at x¹ is built, and the minimization process is repeated. ‖∇f(x¹ + p¹)‖ is tested for closeness to 0.
If it is not close enough, then the process is again repeated until ‖∇f(x^i + p^i)‖ is under the convergence threshold. A local minimum of f is found at x^i + p^i. Details about how to calculate the model (in particular how to find ∇f and ∇²f) are given in Section 4.5, calculations needed for the CG-Steihaug method are given in Section 4.6, and the convergence of ‖∇f(x^i + p^i)‖ is discussed in Section 4.7.

4.4.2 Pseudo Code

FIG. 14 shows a pseudo code listing of the described technique.

4.5 Computation of the Trust-Region Model

The goal of this section is to build tools that will aid in the calculation of Function 4.18, the quadratic model used for the Trust-Region method. In particular, residuals are used to find formulas for f(x), ∇f(x) and ∇²f(x).

4.5.1 Using Residuals to Obtain the Function, its Gradient and its Hessian

Recall that Q = nbhd(P) ∩ S° ∩ M° = {q_j : 0 ≤ j < t} and define the residuals of Function 4.17 to be

r_i(x) = I_x(q_i) − Ĩ(q_i)  (first kind),
r_{t+i}(x) = √λ D²_{uu} H_x(q_i)  (second kind),
r_{2t+i}(x) = √λ D²_{vv} H_x(q_i)  (third kind),
r_{3t+i}(x) = √λ D²_{uv} H_x(q_i)  (fourth kind),

for 0 ≤ i < t. Define the residual vector of Function 4.17 to be the concatenation of all residuals of the first, second, third and fourth kinds, as follows:

r(x) = ⟨r₀(x), r₁(x), …, r_{4t−1}(x)⟩.    (4.22)

Now the norm squared of the residual vector is broken down in terms of intensity and curvature:

‖r(x)‖² = Σ_{i=0}^{t−1} (I_x(q_i) − Ĩ(q_i))² + λ C²_x(q_i).    (4.23)

Thus,

f(x) = ‖r(x)‖² = r(x)ᵀ r(x).    (4.24)

In fact, the residuals were chosen so that the above equation is true. Formulas for the gradient and Hessian of f can also be derived from the residuals. Let

∇r_i(x) = ⟨∂r_i(x)/∂x₀, …, ∂r_i(x)/∂x_{m+n−1}⟩.    (4.25)

Then, componentwise,

∂f(x)/∂x_j = Σ_{i=0}^{4t−1} 2 r_i(x) ∂r_i(x)/∂x_j.    (4.26)

Thus,

∇f(x) = 2 Σ_{i=0}^{4t−1} r_i(x) ∇r_i(x).    (4.27)

A similar derivation leads to the following formula for the Hessian:

∇²f(x) = 2 Σ_{i=0}^{4t−1} (∇r_i(x) ∇r_i(x)ᵀ + r_i(x) ∇²r_i(x))    (4.28)

where ∇r_i is the residual gradient vector, as previously defined, and ∇²r_i is the residual Hessian, whose (j, k) entry is

∂²r_i(x) / ∂x_j ∂x_k.    (4.29)

Simple formulas will be derived below for each of

∂r_i(x)/∂x_j    (4.291)

and

∂²r_i(x)/∂x_j ∂x_k    (4.292)

by breaking down the residual formulas and then taking derivatives. The view code is used to simplify the calculations. The resulting formulas show that these two values must be 0 except on a small set of known indices. This information can be used to find ∇r_i(x) and ∇²r_i(x), and in turn to find f(x), ∇f(x) and ∇²f(x).
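As a concrete illustration, the discrete operators of Section 4.2 and the objective function (4.17) can be evaluated on a toy height field. The sketch below is hypothetical: it ignores the view-code indexing and simply evaluates f for given heights, a target shading, and a light vector.

```python
import math

def Du(H, p, eps=1.0):
    (u, v) = p
    return (H[(u + 1, v)] - H[(u - 1, v)]) / (2.0 * eps)      # (4.02)

def Dv(H, p, eps=1.0):
    (u, v) = p
    return (H[(u, v + 1)] - H[(u, v - 1)]) / (2.0 * eps)      # (4.03)

def D2uu(H, p, eps=1.0):
    (u, v) = p
    return (H[(u + 1, v)] + H[(u - 1, v)] - 2.0 * H[(u, v)]) / eps ** 2

def D2vv(H, p, eps=1.0):
    (u, v) = p
    return (H[(u, v + 1)] + H[(u, v - 1)] - 2.0 * H[(u, v)]) / eps ** 2

def D2uv(H, p, eps=1.0):
    (u, v) = p
    J = H[(u + 1, v)] + H[(u - 1, v)] + H[(u, v + 1)] + H[(u, v - 1)]
    return (H[(u + 1, v + 1)] + H[(u - 1, v - 1)] + 2.0 * H[(u, v)] - J) / (2.0 * eps ** 2)

def intensity(H, p, light, eps=1.0):
    # Lambertian intensity N.l / |N| with N = <-Du H, -Dv H, 1>  (4.04, 4.05)
    n = (-Du(H, p, eps), -Dv(H, p, eps), 1.0)
    return sum(a * b for a, b in zip(n, light)) / math.sqrt(sum(a * a for a in n))

def curvature_sq(H, p, eps=1.0):
    # Squared curvature C_H^2  (4.09)
    return D2uu(H, p, eps) ** 2 + D2vv(H, p, eps) ** 2 + D2uv(H, p, eps) ** 2

def objective(H, shading, pixels, light, lam, eps=1.0):
    # Discrete objective (4.17): squared intensity mismatch + lam * C^2.
    return sum((intensity(H, q, light, eps) - shading[q]) ** 2
               + lam * curvature_sq(H, q, eps)
               for q in pixels)

# A flat surface lit head-on matches a shading of 1.0, so f vanishes;
# a plane of slope 1 in u has intensity 1/sqrt(2) and zero curvature.
H = {(u, v): 0.0 for u in range(3) for v in range(3)}
H2 = {(u, v): float(u) for u in range(3) for v in range(3)}
light = (0.0, 0.0, 1.0)
shading = {(1, 1): 1.0}
f0 = objective(H, shading, [(1, 1)], light, lam=0.5)
```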
The net result is an easy way to calculate the Trust-Region model function (Function 4.18).

4.5.2 Residuals of the First Kind

The formula for intensity can be used to write residuals of the first kind as

r_i(x) = (N_x(q_i) · ℓ) / ‖N_x(q_i)‖ − Ĩ(q_i)    (4.30)

for 0 ≤ i < t, where N_x denotes the discrete normal of the surface with height field H_x. Using the quotient rule to take partial derivatives, it follows that

∂r_i(x)/∂x_j = (‖N_x(q_i)‖ ∂(N_x(q_i) · ℓ)/∂x_j − (N_x(q_i) · ℓ) ∂‖N_x(q_i)‖/∂x_j) / ‖N_x(q_i)‖²    (4.31)

and the second partial derivatives (4.32) follow by differentiating Formula 4.31 once more. Key parts of Formulas 4.31 and 4.32 can be broken down into more manageable parts. The chain rule can be used on

‖N_x(q_i)‖ = √(1 + (D_u H_x(q_i))² + (D_v H_x(q_i))²)    (4.321)

to obtain

∂‖N_x(q_i)‖/∂x_j = (D_u H_x(q_i) ∂(D_u H_x(q_i))/∂x_j + D_v H_x(q_i) ∂(D_v H_x(q_i))/∂x_j) / √(1 + (D_u H_x(q_i))² + (D_v H_x(q_i))²)    (4.33)

and taking the partial derivative of

N_x(q_i) · ℓ = −ℓ₁ D_u H_x(q_i) − ℓ₂ D_v H_x(q_i) + ℓ₃    (4.331)

leads to

∂(N_x(q_i) · ℓ)/∂x_j = −ℓ₁ ∂(D_u H_x(q_i))/∂x_j − ℓ₂ ∂(D_v H_x(q_i))/∂x_j.    (4.34)

Formulas 4.33 and 4.34 can be broken down even further by finding formulas for the partial derivatives of D_u H_x(q_i) and D_v H_x(q_i). Recall that

H_x(u, v) = H(u, v) + x_{V(u,v)},    (4.341)

where V is the view code. The definition of the discrete first derivative in the u-direction gives

D_u H_x(q_i) = (H_x(q_i + δ₁) − H_x(q_i − δ₁)) / 2ε    (4.35)

where δ₁ is the raster point (1, 0), which means

∂(D_u H_x(q_i))/∂x_j = (1/2ε) (∂x_{V(q_i+δ₁)}/∂x_j − ∂x_{V(q_i−δ₁)}/∂x_j).    (4.36)

Similarly,

∂(D_v H_x(q_i))/∂x_j = (1/2ε) (∂x_{V(q_i+δ₂)}/∂x_j − ∂x_{V(q_i−δ₂)}/∂x_j)    (4.37)

where δ₂ is the raster point (0, 1). The view code gives

∂x_{V(u,v)}/∂x_j = 1 if V(u, v) = j, and 0 otherwise.    (4.38)

Now that the formulas for the first and second partial derivatives for residuals of the first kind have been broken down as much as possible, it is time to use the atomic pieces to calculate back up the chain. Substitute Formula 4.38 into Formulas 4.36 and 4.37, substitute the results into Formulas 4.33 and 4.34, and lastly substitute those into Formulas 4.31 and 4.32 to calculate ∂r_i(x)/∂x_j and ∂²r_i(x)/∂x_j ∂x_k, as desired.
Note in particular that

∂r_i(x)/∂x_j = 0 if j ∉ {V(q_i+δ₁), V(q_i−δ₁), V(q_i+δ₂), V(q_i−δ₂)}    (4.39)

and

∂²r_i(x)/∂x_j ∂x_k = 0 if j or k ∉ {V(q_i+δ₁), V(q_i−δ₁), V(q_i+δ₂), V(q_i−δ₂)}.    (4.40)

4.5.3 Residuals of the Second, Third and Fourth Kinds

The second residuals are

r_{t+i}(x) = √λ D²_{uu} H_x(q_i) = √λ (H_x(q_i+δ₁) + H_x(q_i−δ₁) − 2H_x(q_i)) / ε²    (4.41)

for 0 ≤ i < t, which gives

∂r_{t+i}(x)/∂x_j = (√λ/ε²) (∂x_{V(q_i+δ₁)}/∂x_j + ∂x_{V(q_i−δ₁)}/∂x_j − 2 ∂x_{V(q_i)}/∂x_j).    (4.42)

Use Formula 4.38 for calculation of the above and note that

∂r_{t+i}(x)/∂x_j = 0 if j ∉ {V(q_i+δ₁), V(q_i−δ₁), V(q_i)}    (4.43)

and

∂²r_{t+i}(x)/∂x_j ∂x_k = 0 for all j and k.    (4.44)

The third residuals are

r_{2t+i}(x) = √λ D²_{vv} H_x(q_i) = √λ (H_x(q_i+δ₂) + H_x(q_i−δ₂) − 2H_x(q_i)) / ε²    (4.45)

for 0 ≤ i < t, which gives

∂r_{2t+i}(x)/∂x_j = (√λ/ε²) (∂x_{V(q_i+δ₂)}/∂x_j + ∂x_{V(q_i−δ₂)}/∂x_j − 2 ∂x_{V(q_i)}/∂x_j).    (4.46)

Use Formula 4.38 for calculation of the above and note that

∂r_{2t+i}(x)/∂x_j = 0 if j ∉ {V(q_i+δ₂), V(q_i−δ₂), V(q_i)}    (4.47)

and

∂²r_{2t+i}(x)/∂x_j ∂x_k = 0 for all j and k.    (4.48)

The fourth residuals are

r_{3t+i}(x) = √λ D²_{uv} H_x(q_i) = √λ (H_x(q_i+δ₁+δ₂) + H_x(q_i−δ₁−δ₂) + 2H_x(q_i) − J_{H_x}(q_i)) / 2ε²    (4.49)

for 0 ≤ i < t, where

J_{H_x}(q_i) = H_x(q_i+δ₁) + H_x(q_i−δ₁) + H_x(q_i+δ₂) + H_x(q_i−δ₂),    (4.491)

which gives

∂r_{3t+i}(x)/∂x_j = (√λ/2ε²) (∂x_{V(q_i+δ₁+δ₂)}/∂x_j + ∂x_{V(q_i−δ₁−δ₂)}/∂x_j + 2 ∂x_{V(q_i)}/∂x_j − ∂x_{V(q_i+δ₁)}/∂x_j − ∂x_{V(q_i−δ₁)}/∂x_j − ∂x_{V(q_i+δ₂)}/∂x_j − ∂x_{V(q_i−δ₂)}/∂x_j).    (4.50)

Use Formula 4.38 for calculation of the above and note that

∂r_{3t+i}(x)/∂x_j = 0 if j ∉ {V(q_i+δ₁+δ₂), V(q_i−δ₁−δ₂), V(q_i), V(q_i+δ₁), V(q_i−δ₁), V(q_i+δ₂), V(q_i−δ₂)}    (4.51)

and

∂²r_{3t+i}(x)/∂x_j ∂x_k = 0 for all j and k.    (4.52)

Note that for first, second, third and fourth residuals, i.e., for 0 ≤ i < 4t,

∂r_i(x)/∂x_j = 0 if j ∉ {V(q_i), V(q_i+δ₁+δ₂), V(q_i−δ₁−δ₂), V(q_i+δ₁), V(q_i−δ₁), V(q_i+δ₂), V(q_i−δ₂)}    (4.53)

and

∂²r_i(x)/∂x_j ∂x_k = 0 if j or k ∉ {V(q_i+δ₁), V(q_i−δ₁), V(q_i+δ₂), V(q_i−δ₂)}.    (4.54)

(Here, for a residual with index i ≥ t, q_i is understood to mean q_{i mod t}.)

4.6 Minimization of the Trust-Region Model: The CG-Steihaug Method

4.6.1 Overview

At each step in the Trust-Region Newton-CG method, SBS uses the CG-Steihaug method to find a minimum of the model function in the trust region. Conjugate gradient methods try to solve a linear system Ax = −b, where A is a symmetric matrix. The problem can be re-formulated as follows:

Ax = −b  ⇔  Ax + b = 0  ⇔  ∇φ(x) = 0, where φ(x) = ½ xᵀ A x + bᵀ x,    (4.55)

which puts the problem in terms of minimizing φ(x).
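A compact, illustrative pure-Python sketch of the machinery this section develops — the standard conjugate-gradient recurrences together with the two Steihaug stopping rules — is the following. The caller supplies the matrix-vector product A·d; all names are hypothetical and the code is a sketch, not the SBS implementation.

```python
def cg_steihaug(A_mul, b, radius, tol=1e-10, max_iter=None):
    """Minimize 0.5 p.Ap + b.p subject to |p| <= radius.
    Stops early on negative curvature or when the iterate leaves the region,
    returning the intersection with the trust-region boundary."""
    n = len(b)
    max_iter = max_iter or n
    dot = lambda u, v: sum(a * c for a, c in zip(u, v))
    p = [0.0] * n
    r = list(b)                      # residual of Ap + b at p = 0
    d = [-ri for ri in r]            # first direction: steepest descent
    for _ in range(max_iter):
        if dot(r, r) ** 0.5 < tol:
            break
        Ad = A_mul(d)
        dAd = dot(d, Ad)
        if dAd <= 0:                 # negative curvature: go to the boundary
            return _to_boundary(p, d, radius, dot)
        alpha = dot(r, r) / dAd
        p_next = [pi + alpha * di for pi, di in zip(p, d)]
        if dot(p_next, p_next) ** 0.5 >= radius:
            return _to_boundary(p, d, radius, dot)
        r_next = [ri + alpha * adi for ri, adi in zip(r, Ad)]
        beta = dot(r_next, r_next) / dot(r, r)
        d = [-ri + beta * di for ri, di in zip(r_next, d)]
        p, r = p_next, r_next
    return p

def _to_boundary(p, d, radius, dot):
    """Positive t with |p + t d| = radius (solve the quadratic in t)."""
    pd, dd, pp = dot(p, d), dot(d, d), dot(p, p)
    t = (-pd + (pd * pd + dd * (radius ** 2 - pp)) ** 0.5) / dd
    return [pi + t * di for pi, di in zip(p, d)]

# A = diag(2, 4), b = (-2, -8): unconstrained minimum of the model at (1, 2).
A_mul = lambda v: [2.0 * v[0], 4.0 * v[1]]
b = [-2.0, -8.0]
p_in = cg_steihaug(A_mul, b, radius=10.0)   # interior solution
p_bd = cg_steihaug(A_mul, b, radius=1.0)    # clipped to the boundary
```

With a large radius the method returns the interior minimizer exactly; with a small radius it returns a point on the trust-region boundary, exactly as the stopping criteria developed below prescribe.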
Recall the Trust-Region model of f at step i,

F_i(x) = f(x^i) + ∇f(x^i)ᵀ x + ½ xᵀ ∇²f(x^i) x.    (4.551)

φ(x), A and b can be expressed in terms of the Trust-Region method as follows:

φ_i(x) = F_i(x) − f(x^i),    (4.56)

A = ∇²f(x^i)    (4.57)

and

b = ∇f(x^i).    (4.58)

Note that A is, in fact, symmetric. In terms of the above, the function to be minimized by the CG-Steihaug method is

φ_i(x) = ½ xᵀ ∇²f(x^i) x + ∇f(x^i)ᵀ x.    (4.59)

CG methods use the notion of conjugate gradients (CG) to build a sequence of vectors that converge to the minimum of φ_i(x). For a given i, a set of vectors

D = {d^{i,0}, d^{i,1}, …, d^{i,j}, …}    (4.591)

is called conjugate (the "C" in CG) with respect to a matrix A if

(d^{i,j})ᵀ A d^{i,k} = 0    (4.592)

for all j ≠ k. Given such a set D at step i of the Trust-Region method, define a sequence p^{i,j} by p^{i,0} = 0 and, for j ≥ 0,

p^{i,j+1} = p^{i,j} + α_j d^{i,j}    (4.60)

where α_j is the one-dimensional minimizer of φ_i(x) along

x = p^{i,j} + α d^{i,j}.    (4.601)

The sequence p^{i,j} converges to the desired minimum p^i in at most m + n steps (where, as before, m + n is the dimension of the objective function). Now the goals are how to build the set D and how to find α_j for each j ≥ 0. Let

d^{i,0} = −∇φ_i(0) = −b.    (4.602)

In other words, choose the first direction in which to search for a minimum to be the direction of steepest descent. This is the direction determined by the gradient (the "G" in CG). The residual of the system is defined as r(x) = Ax + b, and is used at each step to determine the next direction in which to search. Since p^{i,0} = 0, the residual at step 0 is r^{i,0} = b, and for j ≥ 0

r^{i,j+1} = A p^{i,j+1} + b = A (p^{i,j} + α_j d^{i,j}) + b = r^{i,j} + α_j A d^{i,j}.    (4.61)

For j ≥ 0, define d^{i,j+1} as

d^{i,j+1} = −r^{i,j+1} + β_{j+1} d^{i,j}    (4.62)

where

β_{j+1} = ((r^{i,j+1})ᵀ r^{i,j+1}) / ((r^{i,j})ᵀ r^{i,j}).    (4.63)

Such a choice for β_{j+1} results in a d^{i,j+1} such that

(d^{i,j+1})ᵀ A d^{i,k} = 0    (4.631)

for all k ≤ j, which guarantees that the set D is conjugate for each j.

To calculate the minimizer α_j, find an expression for φ_i along x = p^{i,j} + α d^{i,j} by multiplying out terms and rearranging according to powers of α:

φ_i(p^{i,j} + α d^{i,j}) = ½ α² (d^{i,j})ᵀ A d^{i,j} + α (d^{i,j})ᵀ (A p^{i,j} + b) + K    (4.64)

where K is constant with respect to α. Next, a derivative is taken with respect to α:

(d/dα) φ_i(p^{i,j} + α d^{i,j}) = α (d^{i,j})ᵀ A d^{i,j} + (d^{i,j})ᵀ (A p^{i,j} + b).    (4.65)

Set the above quantity equal to 0 to find the critical point:

α_j = −((d^{i,j})ᵀ (A p^{i,j} + b)) / ((d^{i,j})ᵀ A d^{i,j}) = −((d^{i,j})ᵀ r^{i,j}) / ((d^{i,j})ᵀ A d^{i,j}).    (4.66)

In order to ensure that the critical point corresponds to a minimum, the second derivative is taken to find concavity:

(d²/dα²) φ_i(p^{i,j} + α d^{i,j}) = (d^{i,j})ᵀ A d^{i,j}.    (4.67)

It is not known if the above quantity is positive. The case

(d^{i,j})ᵀ A d^{i,j} ≤ 0    (4.671)

is handled later, and the derivation continues assuming that

(d^{i,j})ᵀ A d^{i,j} > 0.    (4.672)

The formula for α_j can be written in a better way using the fact that for each j > 0,

(r^{i,j})ᵀ r^{i,k} = 0    (4.673)

for all k < j, a fact that will now be proved by induction. For j = 1, using d^{i,0} = −b, r^{i,0} = b and α₀ = bᵀb / (bᵀAb),

(r^{i,1})ᵀ r^{i,0} = (b + α₀ A d^{i,0})ᵀ b = bᵀb − (bᵀb / bᵀAb) (Ab)ᵀ b = 0.    (4.68)

Now assume

(r^{i,j−1})ᵀ r^{i,k} = 0 for k < j − 1.    (4.681)

Then for k < j − 1,

(r^{i,j})ᵀ r^{i,k} = (r^{i,j−1})ᵀ r^{i,k} + α_{j−1} (d^{i,j−1})ᵀ A r^{i,k}    (4.69)

where (r^{i,j−1})ᵀ r^{i,k} = 0 (4.691) by the induction hypothesis and, since r^{i,k} = β_k d^{i,k−1} − d^{i,k},

(d^{i,j−1})ᵀ A r^{i,k} = 0    (4.692)

by the conjugacy of D. Now handle the k = j − 1 case. By the definition of α_{j−2} in Formula 4.66, (d^{i,j−2})ᵀ r^{i,j−1} = 0, so (d^{i,j−1})ᵀ r^{i,j−1} = (−r^{i,j−1} + β_{j−1} d^{i,j−2})ᵀ r^{i,j−1} = −(r^{i,j−1})ᵀ r^{i,j−1}, and

(r^{i,j})ᵀ r^{i,j−1} = (r^{i,j−1})ᵀ r^{i,j−1} + α_{j−1} (d^{i,j−1})ᵀ A r^{i,j−1} = (r^{i,j−1})ᵀ r^{i,j−1} − α_{j−1} (d^{i,j−1})ᵀ A d^{i,j−1} = 0.    (4.70)

Thus, for each j > 0,

(r^{i,j})ᵀ r^{i,k} = 0    (4.701)

for all k < j. A new expression for α_j can now be presented: since (d^{i,j})ᵀ r^{i,j} = −(r^{i,j})ᵀ r^{i,j}, Formula 4.66 becomes

α_j = ((r^{i,j})ᵀ r^{i,j}) / ((d^{i,j})ᵀ A d^{i,j}).    (4.71)

To summarize, the CG-Steihaug method is used at step i in the Trust-Region Newton-CG method to find a point at which a minimum of the model function occurs in the trust region. Three sequences are used to do this: d^{i,j} is a sequence of conjugate gradient direction vectors, r^{i,j} is a sequence of residuals, and p^{i,j} converges to the desired minimum p^i. The conjugacy of the d^{i,j} guarantees convergence in at most m + n steps.

The starting quantities for the sequences are p^{i,0} = 0, r^{i,0} = ∇f(x^i) = b, and d^{i,0} = −∇f(x^i) = −b. Subsequent entries in the sequences are given by using the helper formulas

α_j = ((r^{i,j})ᵀ r^{i,j}) / ((d^{i,j})ᵀ A d^{i,j})    (4.72)

and

β_{j+1} = ((r^{i,j+1})ᵀ r^{i,j+1}) / ((r^{i,j})ᵀ r^{i,j})    (4.73)

and are

p^{i,j+1} = p^{i,j} + α_j d^{i,j},    (4.74)

r^{i,j+1} = r^{i,j} + α_j A d^{i,j}    (4.75)

and

d^{i,j+1} = −r^{i,j+1} + β_{j+1} d^{i,j}.    (4.76)

Up to this point, the method described is a standard CG method. The sequence p^{i,j} is generated until the residual falls under some threshold. The Steihaug variant of the CG method takes into account the case of (d^{i,j})ᵀ A d^{i,j} ≤ 0, which would violate the assumption that the α_j calculated corresponds to a minimum, and the case of the minimum of the model being found outside the area of interest, i.e., the given trust region. To handle these cases, two extra stopping criteria are added. If

(d^{i,j})ᵀ A d^{i,j} ≤ 0    (4.762)

or if

‖p^{i,j+1}‖ ≥ Δ_i,    (4.763)

then the intersection of the trust boundary and direction d^{i,j} is assigned as the final point p^i. Since p^{i,0} = 0, it can be shown that

‖p^{i,j}‖ < ‖p^{i,j+1}‖    (4.764)

for each j ≥ 0, meaning that returning the intersection with the trust region boundary is the best that the sequence can do once the boundary is reached. The Trust-Region method then uses p^i to calculate the test value. Depending on the results, either the solution is seen as good enough or the Trust-Region method goes into iteration i + 1, with the center of the trust region x^{i+1} being either the same as x^i or moved to x^i + p^i.

4.6.2 Pseudo Code

In the pseudo code that follows, the sequence of points p^{i,j} is denoted by min_pt, the sequence of directions d^{i,j} is denoted by direction, and the sequence of residuals r^{i,j} is denoted by residual. "a dot b" is used to denote aᵀb.

4.6.3 Sparse Matrix Multiplication

By far the most expensive operation in each iteration of the CG-Steihaug method is the multiplication of A with the direction vector. Recall the following formula for A in terms of residuals of the objective function:

A = ∇²f(x) = 2 Σ_{i=0}^{4t−1} (∇r_i(x) ∇r_i(x)ᵀ + r_i(x) ∇²r_i(x)).    (4.77)

Let

A_i(x) = ∇r_i(x) ∇r_i(x)ᵀ + r_i(x) ∇²r_i(x)    (4.771)

for each i. Let (A_i)_{jk} be the (j, k) element of A_i, and let e^j be the vector of length m + n such that e^j_j = 1 and e^j_k = 0 otherwise.    (4.772)

Suppose that d is one of the conjugate direction vectors built by the CG-Steihaug method at a fixed step of the Trust-Region method. Then

A d = 2 Σ_{i=0}^{4t−1} A_i d = 2 Σ_{i=0}^{4t−1} Σ_j Σ_k (A_i)_{jk} d_k e^j    (4.78)

which requires (m + n)² multiplications for each A_i, one for each pair (j, k). However, the calculation can be done more efficiently by using Equations 4.53 and 4.54. These show that for each i, (A_i)_{jk} is nonzero only if j and k are both in

W_i = {V(q_i), V(q_i+δ₁+δ₂), V(q_i−δ₁−δ₂), V(q_i+δ₁), V(q_i−δ₁), V(q_i+δ₂), V(q_i−δ₂)}.    (4.781)

Thus, for each i,

A_i d = Σ_{j ∈ W_i} Σ_{k ∈ W_i} (A_i)_{jk} d_k e^j,    (4.79)

which can be computed in O(1) multiplications. The overall order is then reduced from O((m + n)²) to O(m + n).

4.7 Convergence of the Trust-Region Newton-CG Method

The sequence x^i built by a Trust-Region method satisfies

‖∇f(x^i)‖ → 0    (4.791)

as i → ∞ if the following conditions hold:

* At each step i of the Trust-Region method, the p^i built by the minimization algorithm satisfies

F_i(0) − F_i(p^i) ≥ c₁ ‖∇f(x^i)‖ min(Δ_i, ‖∇f(x^i)‖ / ‖∇²f(x^i)‖)    (4.792)

for some constant c₁ > 0.

* The sequence p^i built by the minimization algorithm satisfies

‖p^i‖ ≤ c₂ Δ_i    (4.793)

for some constant c₂ ≥ 1.

* The threshold for moving the center of the trust region is chosen in the interval (0, ¼).

* f is bounded below on the level set L = {x : f(x) ≤ f(x⁰)}.    (4.794)

* L is bounded, i.e., there is a constant c₄ such that ‖x‖ ≤ c₄ for all x ∈ L.    (4.795)

* There exists a constant c₅ such that

‖∇²f(x^i)‖ ≤ c₅    (4.796)

for all i ≥ 0.

* f is Lipschitz continuously differentiable on L, i.e., there exists some constant c₆ ∈ ℝ such that for any x, y ∈ L,

‖∇f(x) − ∇f(y)‖ ≤ c₆ ‖x − y‖.    (4.797)

Each of the above claims is proved below.

4.7.1 Model Estimate

SBS uses the CG-Steihaug method at each step i to find a point

p^i = lim_j p^{i,j}    (4.798)

that corresponds to a minimum of the model F_i in the trust region. At step i, a Cauchy point p^i_C is defined to be

p^i_C = −τ_i (Δ_i / ‖∇f(x^i)‖) ∇f(x^i)    (4.80)

where

τ_i = 1 if (∇f(x^i))ᵀ ∇²f(x^i) ∇f(x^i) ≤ 0, and otherwise
τ_i = min(‖∇f(x^i)‖³ / (Δ_i (∇f(x^i))ᵀ ∇²f(x^i) ∇f(x^i)), 1).    (4.81)

It follows that

F_i(0) − F_i(p^i_C) ≥ ½ ‖∇f(x^i)‖ min(Δ_i, ‖∇f(x^i)‖ / ‖∇²f(x^i)‖).
(4.82)

The CG-Steihaug method gives

p^{i,1} = α₀ d^{i,0} = ((∇f(x^i))ᵀ ∇f(x^i) / ((∇f(x^i))ᵀ ∇²f(x^i) ∇f(x^i))) (−∇f(x^i)).    (4.83)

Thus,

F_i(0) − F_i(p^i) ≥ F_i(0) − F_i(p^{i,1}) ≥ ½ ‖∇f(x^i)‖ min(Δ_i, ‖∇f(x^i)‖ / ‖∇²f(x^i)‖),    (4.84)

so the first convergence condition is satisfied with c₁ = ½.

4.7.2 Bound on Model Minimum

‖p^i‖ ≤ Δ_i, since for any j, if the p^{i,j+1} computed by the CG-Steihaug algorithm falls outside of the trust region, then it is replaced with a vector on the boundary and the algorithm returns that value. The second condition thus holds with c₂ = 1.

4.7.3 Trust Region Center Move Threshold Range

Enforcing a range on the threshold to move the center of the trust region is a matter of setting it properly. In the pseudo code, the move threshold is set to 0.1 ∈ (0, ¼).

4.7.4 Bound Below of f on the Level Set

Recall that

f(x) = Σ_{i=0}^{t−1} (I_x(q_i) − Ĩ(q_i))² + λ C²_x(q_i).    (4.841)

By definition

C²_x(q_i) ≥ 0    (4.842)

for each q_i, and by choice λ > 0. Therefore f(x) ≥ 0 for all x ∈ ℝ^{m+n}, and thus also for all x such that f(x) ≤ f(x⁰).

4.7.5 Bound on the Level Set

Let

w_j(x) = (x_{V(q_j+δ₁)} + x_{V(q_j−δ₁)} − 2 x_{V(q_j)}) / ε²    (4.85)

and

h_j = (H(q_j+δ₁) + H(q_j−δ₁) − 2 H(q_j)) / ε²    (4.86)

where δ₁ = (1, 0). Define h by

h(x) = λ Σ_{j=0}^{t−1} (D²_{uu} H_x(q_j))² = λ Σ_{j=0}^{t−1} (h_j + w_j(x))²    (4.87)

and v by

v(x) = λ Σ_{j=0}^{t−1} (w_j(x))².    (4.88)

Then h(x) ≤ f(x) for all x, so if there exists a c₄ > 0 such that ‖x‖ = 1 and s > c₄ imply h(sx) > f(x⁰), then f(sx) ≥ h(sx) > f(x⁰) for all such s and x, meaning that f(y) ≤ f(x⁰) would imply ‖y‖ ≤ c₄, as desired.

For any scalar s > 0 and for each j,

w_j(sx) = s w_j(x),    (4.89)

which implies

h(sx) = λ Σ_j h_j² + 2sλ Σ_j h_j w_j(x) + s² v(x),    (4.891)

a quadratic in s whose leading coefficient is v(x) and whose other coefficients are constant with respect to s.    (4.90)

v is a sum of squares, so v(x) ≥ 0.
Given an increment vector x ≠ 0, h(tx) = 0 only if

    D_tt H_tx(q_t) = 0    (4.901)

for all t. This would imply x is the zero vector, because increment vectors are required to be 0 on the boundary of the projected surface: any line of pixels through the projected surface in the t-direction would have to be 0 on the boundary and could not vary from 0, since

    D_tt H(q) = 0.    (4.902)

Since ‖x‖ = 1 and t > 0, then h(tx) > 0, which implies

    h( k + v(x) ) > 0.    (4.903)

Let

    K = min_{‖x‖ = 1} h( k + v(x) )    (4.904)

and

    c₄ = √( f(x⁰) / K ) + 1.    (4.905)

Then for any t > c₄ and ‖x‖ = 1,

    h(tx) = t² h( k + v(x) ) ≥ K t² > K c₄² ≥ f(x⁰),    (4.91)

which implies

    h(tx) > f(x⁰).    (4.92)

4.7.6 Bound on Hessian

In the Trust-Region Newton-CG method there are two possibilities for x^(i+1): either x^(i+1) = xⁱ, in which case f(x^(i+1)) = f(xⁱ), or x^(i+1) = xⁱ + pⁱ, where pⁱ is the point resulting from the CG-Steihaug method. The latter case is only permitted if the model test value

    ρᵢ = ( f(xⁱ) − f(xⁱ + pⁱ) ) / ( Fᵢ(0) − Fᵢ(pⁱ) )    (4.921)

is greater than the threshold to move the center of the trust region. The threshold is chosen in (0, ¼), so if the center of the trust region is moved then ρᵢ > 0. Since the CG-Steihaug method minimizes the model in the trust region,

    Fᵢ(0) − Fᵢ(pⁱ) ≥ 0    (4.922)

is guaranteed, which implies that the denominator of ρᵢ is nonnegative. If ρᵢ > 0, then

    f(xⁱ) − f(xⁱ + pⁱ) > 0.    (4.923)

Thus

    f(x⁰) ≥ f(xⁱ)    (4.924)

for all i ≥ 0, which means that L includes all xⁱ. If the claim is proved on L, then it will also be proved for all xⁱ. f and all of its partial derivatives are continuous. On the compact set { x : ‖x‖ ∈ [0, c₄] } (where c₄ is the bound on L calculated above), f and all of its derivatives are bounded. For each j and k with 0 ≤ j, k < m + n there exist constants k_jk such that

    | ∂²f / ∂x_j ∂x_k | ≤ k_jk    (4.925)

for all x such that ‖x‖ ∈ [0, c₄], and thus
    ‖∇²f‖ ≤ Σ_{j,k} k_jk =: c₅.    (4.93)

4.7.7 Lipschitz Continuous Differentiability of f

Let k_jk be as in the previous section, and let x, y ∈ L. Fix j and let

    u = y − x,    (4.931)

    γ(t) = x + t u,    (4.932)

and

    g(t) = ( ∂f / ∂x_j )( γ(t) ).    (4.933)

Then, by the mean value theorem, for some ξ ∈ (0, 1),

    | ( ∂f / ∂x_j )(y) − ( ∂f / ∂x_j )(x) | = | g(1) − g(0) |
        = | Σ_k ( ∂²f / ∂x_j ∂x_k )( γ(ξ) ) u_k |
        ≤ ( Σ_k k_jk ) ‖y − x‖.    (4.94)

This shows that ∂f/∂x_j is Lipschitz continuous for each j (4.941), which means f is Lipschitz continuously differentiable on L.

FIGS. 16A and 16B set forth tables 500a and 500b providing, for convenient reference, a listing of mathematical notation used in describing systems and techniques according to aspects of the present invention.

V. Flowcharts of Generalized Methods

FIGS. 17-22 show a series of flowcharts illustrating a generalized method 600 and sub-methods 620, 640, 660, 680, and 700 according to the above-discussed aspects of the invention for generating a geometrical model representing geometry of at least a portion of a surface of a three-dimensional (3D) object by shading by an operator in connection with a two-dimensional (2D) image of the object, the image representing the object as projected onto an image plane. The generalized method 600 shown in FIG. 17 comprises the following steps:

Step 601: Receiving shading information provided by the operator in connection with the image of the object, the shading information representing a change in brightness level of at least a portion of the image.

Step 602: Generating, in response to the shading information, an updated geometrical model of the object, the shading information being used to determine at least one geometrical feature of the updated geometrical model.

Step 603: Displaying the image of the object as defined by the updated geometrical model.
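Steps 601-603 form a simple interactive loop. The skeleton below is illustrative only: the callables `update_geometry` and `render` are hypothetical stand-ins for the patent's model generator and display, supplied by the caller; nothing here is the patent's actual API.

```python
def run_method_600(model, shading_events, update_geometry, render):
    """Drive steps 601-603 of the generalized method: for each piece of
    operator shading information, update the geometrical model and
    re-display the image it defines."""
    frames = []
    for shading in shading_events:                 # step 601: receive shading
        model = update_geometry(model, shading)    # step 602: update geometry
        frames.append(render(model))               # step 603: display image
    return model, frames
```

For example, with a toy "model" that is just a running height offset, `run_method_600(0.0, [0.5, -0.2], lambda m, s: m + s, str)` returns the accumulated model and one rendered frame per shading event.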
As discussed above, the generalized method 600 can operate upon a digital input of any hierarchical subdivision surface, polygon mesh or NURBS surface.

Generalized method 600 may include sub-method 620 shown in FIG. 18, comprising the following steps:

Step 621: Once a subdivision surface has been generated and displayed to a user, matching the subdivision surface to a 2D model view, the 2D model view including information about grid corners, grid width and height, pixel size and camera-to-object transformation.

Step 622: Utilizing the 2D model view to set a lighting direction, tune input parameters and shade, thereby modifying the intensities of selected pixels; or load a set of pre-shaded pixels. This information is then utilized by a shaping algorithm. The parameters can comprise any of (a) influence over how much subdivision occurs in the area of modification and (b) influence over how pronounced the geometrical modifications are.

Step 623: Determining the correct geometric alterations to make to the surface, adding surface primitives where needed via subdivision in the area of the shading, in order to ensure that sufficient detail is present; and determining a height field that reflects in 3D the changes that were requested in the 2D setting, altering the subdivision surface to reflect the determined height values, thereby resulting in a shaped, hierarchical subdivision surface that can be altered further, saved, or converted to a desired output surface type. The surface primitives can include any of triangles, quadrilaterals, or other polygons.

Generalized method 600 may include sub-method 640 shown in FIG. 19, comprising the following steps:

Step 641: Creating an underlying subdivision surface.

Step 642: Displaying a 2D shade view.
Step 643: Enabling a user to set lighting, shading and tune parameters.

Step 644: Executing a shaping process comprising (a) introducing detail on the surface; (b) determining new height parameters for the surface; and (c) shaping the subdivision surface; thereby generating a 3D subdivision surface. The parameters can comprise any of (a) influence over how much subdivision occurs in the area of modification and (b) influence over how pronounced the geometrical modifications are.

Generalized method 600 may also include sub-method 660 shown in FIG. 20, comprising the following steps:

Step 661: Receiving an input comprising either a mesh representation or a NURBS surface.

Step 662: Converting the input to a hierarchical subdivision surface if it is not already one.

Step 663: Performing shading and shaping on the hierarchical subdivision surface.

Step 664: Utilizing adaptive subdivision to add detail to the surface, and analysis and synthesis to propagate changes to all levels of the surface, thereby allowing for modifications at selected levels of detail.

Step 665: Providing a hierarchical subdivision surface library.

Step 666: Converting the subdivision surface model resulting from the SBS process to another surface type if desired.
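The 2D model view named in step 621 carries exactly the data needed to map image pixels back onto the object. A minimal sketch of such a container follows; the field names and layout are assumptions made for illustration, not a structure prescribed by the patent.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ModelView:
    """Hypothetical holder for step 621's model-view data: grid corner,
    grid width and height, pixel size, camera-to-object transformation."""
    corner: np.ndarray      # camera-space position of pixel (0, 0)
    width: int              # grid width in pixels
    height: int             # grid height in pixels
    pixel_size: float       # world-space extent of one pixel
    cam_to_obj: np.ndarray  # 4x4 homogeneous camera-to-object transform

    def pixel_to_object(self, u, v):
        """Map pixel (u, v) on the image grid into object space."""
        p_cam = np.array([self.corner[0] + u * self.pixel_size,
                          self.corner[1] + v * self.pixel_size,
                          self.corner[2],
                          1.0])
        return (self.cam_to_obj @ p_cam)[:3]
```

With an identity transform, a corner at the origin and a pixel size of 0.5, pixel (2, 4) maps to the object-space point (1.0, 2.0, 0.0); the shaping algorithm needs such a mapping to turn shaded pixels into height-field updates on the surface.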
Generalized method 600 may also include sub-method 680 shown in FIG. 21, comprising the following steps:

Step 681: Applying a selected shaping operation, the selected shaping operation being configured to attempt to produce a set of height increments Γ over the model view that minimizes the function given by:

    f(Γ) = Σ_{(u,v)} ( ( I(u,v) − ℓ · N_{H+Γ}(u,v) )² + λ C_{H+Γ}(u,v) )

where I is the discrete intensity at the pixel (u, v), ℓ is a unit vector that points in the direction of the infinitely distant simulated light source, H is the associated height field, C_H denotes the curvature of a surface with associated height field H, λ is a smoothing coefficient, and the sum is performed over pixels in the model view that intersect the interior of the projected surface.

Step 682: Reducing the function to the unconstrained minimization of

    f(x) = Σ_{q_t ∈ Q} ( ( I(q_t) − ℓ · N(q_t) )² + λ C(q_t) )

wherein the method used to perform the minimization is a Trust-Region method.

Step 683: Performing a further reduction from summing over all the pixels in the model view that intersect the interior of the projected surface to summing only over that set reduced by intersecting it with the neighborhood of modified pixels, such that the calculation need not be made over the entire projected surface as seen in the model view, the reduced set being referred to as Q.

Generalized method 600 may also include the following sub-method 700 shown in FIG. 22, comprising the following steps:

Step 701: Modeling the function by the quadratic function:

    F(x) = f(x₀) + ∇f(x₀)ᵀ x + ½ xᵀ ∇²f(x₀) x

where ∇f is the gradient vector of f and ∇²f is the Hessian matrix of f.

Step 702: Minimizing the model F in a selected region. In the present example, the selected region is ‖x‖ ≤ Δ, for some Δ > 0.
Step 703: Implementing the minimization utilizing the CG-Steihaug method with a special sparse matrix multiplication.

Step 704: Constructing a test value from the resulting minimum point xⁱ:

    ρ = ( f(xⁱ) − f(xⁱ + pⁱ) ) / ( Fᵢ(0) − Fᵢ(pⁱ) )

wherein if ρ is close to 1, then F is considered a good model for f within the trust region, the center of the trust region is moved and the trust region radius is increased; or, if ρ is far away from 1, then the radius of the trust region is decreased.

Step 705: Repeating the process until a minimum for f in the trust region is found based on an established criterion. In the present example, the criterion to stop the process is if ‖∇f‖ is sufficiently small at the center of the current trust region, wherein a local minimum of f has been attained.

It should be noted that the above-described generalized method and sub-methods may be implemented as a computer software plug-in product adapted for interoperability with any of a computer-assisted design (CAD) system, a computer graphics system or a software application operable to create, display, manipulate or model geometry. The plug-in product features may include any of: a shading tool with a 2D paint function and the ability to load and save shadings, light controls, parameter tuning, updating of surface shape based on shading information, light direction and input parameters, an undo/redo function internal to the modifier, a tool for selecting an area to be updated utilizing a masking technique, and a selection tool with a set of standard subdivision surface manipulations, wherein the parameters can comprise any of (a) influence over how much subdivision occurs in the area of modification and (b) influence over how pronounced the geometrical modifications are.
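Steps 701-705 can be sketched as a single loop. For brevity, the sketch below minimizes the quadratic model with a Cauchy (steepest-descent) step rather than the CG-Steihaug method named in step 703, and the 0.25/0.75 thresholds for shrinking and growing the radius are conventional textbook choices rather than values taken from the patent.

```python
import numpy as np

def trust_region_minimize(f, grad, hess_vec, x0, delta0=1.0,
                          delta_max=100.0, eta=0.1, tol=1e-8, max_iter=200):
    """Trust-region loop after steps 701-705: quadratic model, model
    minimization within radius delta, rho test to move the center and
    grow/shrink the region; stop when the gradient is small."""
    x, delta = np.asarray(x0, dtype=float), delta0
    for _ in range(max_iter):
        g = grad(x)
        gn = np.linalg.norm(g)
        if gn < tol:                             # step 705: ||grad f|| small
            break
        # Cauchy point of the model (stand-in for CG-Steihaug, step 703)
        curv = g @ hess_vec(x, g)
        tau = 1.0 if curv <= 0 else min(gn ** 3 / (delta * curv), 1.0)
        p = -tau * (delta / gn) * g
        predicted = -(g @ p + 0.5 * p @ hess_vec(x, p))   # F(0) - F(p)
        if predicted <= 0:
            delta *= 0.25
            continue
        rho = (f(x) - f(x + p)) / predicted      # step 704: test value
        if rho < 0.25:
            delta *= 0.25                        # poor model: shrink region
        elif rho > 0.75 and np.isclose(np.linalg.norm(p), delta):
            delta = min(2.0 * delta, delta_max)  # good model: grow region
        if rho > eta:
            x = x + p                            # move trust-region center
    return x
```

On a convex quadratic such as f(x) = (x₀ − 1)² + (x₁ + 2)², the loop converges to the minimizer (1, −2); swapping the Cauchy step for a CG-Steihaug inner solve changes only how the model minimum p is found, not the ρ-based bookkeeping.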
In addition, as described above, the generalized method and sub-methods may include the ability to run the SBS process on polygon meshes or NURBS surfaces without first converting to or using properties of subdivision surfaces. The method and sub-methods may further include the ability to display large, complex meshes at an interactive rate, the ability to trim arbitrarily across surface faces, and/or the ability to sketch contour lines to produce an initial 3D shape, useable in conjunction with the SBS modeling process.

While the foregoing description includes details which will enable those skilled in the art to practice the invention, it should be recognized that the description is illustrative in nature and that many modifications and variations thereof will be apparent to those skilled in the art having the benefit of these teachings. It is accordingly intended that the invention herein be defined solely by the claims appended hereto and that the claims be interpreted as broadly as permitted by the prior art.

Claims (27)

1. A computer-implemented graphics method for generating a geometrical model representing geometry of at least a portion of a surface of a three-dimensional (3D) object by shading by an operator in connection with a two-dimensional (2D) image of the object, the image representing the object as projected onto an image plane, the method comprising: A. receiving shading information provided by the operator in connection with the image of the object, the shading information representing a change in brightness level of at least a portion of the image; B. generating, in response to the shading information, an updated geometrical model of the object, the shading information being used to determine at least one geometrical feature of the updated geometrical model; C. displaying the image of the object as defined by the updated geometrical model; and D. wherein the method can operate upon a digital input of any hierarchical subdivision surface, polygon mesh or NURBS surface.
2. The method of claim 1 wherein once a subdivision surface has been generated and displayed to a user, it is matched to a 2D model view.
3. The method of claim 2 wherein the 2D model view includes information about grid corners, grid width and height, pixel size and camera-to-object transformation.
4. The method of claim 3 wherein, utilizing the 2D model view, the user sets a lighting direction, tunes input parameters and shades, thereby modifying the intensities of selected pixels, or loads a set of pre-shaded pixels, and this information is then utilized by a shaping algorithm; and wherein the parameters can comprise any of (a) influence over how much subdivision occurs in the area of modification and (b) influence over how pronounced the geometrical modifications are.
5. The method of claim 4 wherein: the shaping algorithm determines the correct geometric alterations to make to the surface; additional surface primitives are added where needed via subdivision in the area of the shading, in order to ensure that sufficient detail is present; and wherein a height field is determined that reflects in 3D the changes that were requested in the 2D setting, and the subdivision surface is then altered so as to reflect the determined height values; thereby resulting in a shaped, hierarchical subdivision surface that can be altered further, saved, or converted to a desired output surface type; and wherein surface primitives can include any of triangles, quadrilaterals, or other polygons.
6. The method of claim 1 further comprising: creating an underlying subdivision surface; displaying a 2D shade view; enabling a user to set lighting, shading, and tune parameters; and executing a shaping process, the shaping process comprising: introducing detail on the surface; determining new height parameters for the surface; and shaping the subdivision surface; thereby generating a 3D subdivision surface, wherein the parameters can comprise any of (a) influence over how much subdivision occurs in the area of modification and (b) influence over how pronounced the geometrical modifications are.
7. The method of claim 6 further comprising: receiving an input comprising either a mesh representation or a NURBS surface; converting the input to a hierarchical subdivision surface if it is not already one; and performing shading and shaping on the hierarchical subdivision surface.
8. The method of claim 7 further comprising: utilizing adaptive subdivision to add detail to the surface, and analysis and synthesis to propagate changes to all levels of the surface, thereby allowing for modifications at selected levels of detail.
9. The method of claim 6 further comprising: providing a hierarchical subdivision surface library.
10. The method of claim 9 further comprising: converting the subdivision surface model resulting from the SBS process to another surface type.
11. The method of claim 6 further comprising: applying a selected shaping operation, the selected shaping operation being configured to attempt to produce a set of height increments Γ over the model view that minimizes the function given by:

    f(Γ) = Σ_{(u,v)} ( ( I(u,v) − ℓ · N_{H+Γ}(u,v) )² + λ C_{H+Γ}(u,v) )

where I is the discrete intensity at the pixel (u, v), ℓ is a unit vector that points in the direction of the infinitely distant simulated light source, H is the associated height field, C_H denotes the curvature of a surface with associated height field H, λ is a smoothing coefficient, and the sum is performed over pixels in the model view that intersect the interior of the projected surface.
12. The method of claim 11 further wherein the set of height increments Γ can be reduced to a vector v containing one entry for each pixel or connected area whose corresponding height value(s) may be altered by the shaping algorithm, such that the function is reduced to the unconstrained minimization of

    f(x) = Σ_{q_t ∈ Q} ( ( I(q_t) − ℓ · N(q_t) )² + λ C(q_t) )

and further wherein the method used to perform the minimization is a Trust-Region method.
13. The method of claim 12 comprising a further reduction from summing over all the pixels in the model view that intersect the interior of the projected surface to summing only over that set reduced by intersecting it with the neighborhood of modified pixels, such that the calculation need not be made over the entire projected surface as seen in the model view, the reduced set being referred to as Q.
14. The method of claim 12 wherein the Trust-Region method comprises first modeling the function by the quadratic function:

    F(x) = f(x₀) + ∇f(x₀)ᵀ x + ½ xᵀ ∇²f(x₀) x

where ∇f is the gradient vector of f and ∇²f is the Hessian matrix of f; and then minimizing the model F in the region ‖x‖ ≤ Δ, for some Δ > 0.
15. The method of claim 14 further wherein: the method utilized to implement the minimization is the CG-Steihaug method with a special sparse matrix multiplication; a test value is constructed from the resulting minimum point xⁱ and is:

    ρ = ( f(xⁱ) − f(xⁱ + pⁱ) ) / ( Fᵢ(0) − Fᵢ(pⁱ) )

and wherein if ρ is close to 1, then F is considered a good model for f within the trust region, the center of the trust region is moved and the trust region radius is increased; or, if ρ is far away from 1, then the radius of the trust region is decreased; and the process is repeated until a minimum for F in the trust region is found; and wherein the criterion to stop the process is if ‖∇f‖ is sufficiently small at the center of the current trust region, wherein a local minimum of f has been attained.
16. The method of claim 15 further comprising implementing the method as a computer software plug-in product adapted for interoperability with any of a computer-assisted design (CAD) system, a computer graphics system or a software application operable to create, display, manipulate or model geometry.
17. The method of claim 16 further wherein the plug-in product features include any of: a shading tool with a 2D paint function and the ability to load and save shadings, light controls, parameter tuning, updating of surface shape based on shading information, light direction and input parameters, an undo/redo function internal to the modifier, a tool for selecting an area to be updated utilizing a masking technique, and a selection tool with a set of standard subdivision surface manipulations, wherein the parameters can comprise any of (a) influence over how much subdivision occurs in the area of modification and (b) influence over how pronounced the geometrical modifications are.
18. The method of claim 10 further comprising the ability to run the SBS process on polygon meshes or NURBS surfaces without first converting to or using properties of subdivision surfaces.
19. The method of claim 10 further comprising the ability to display large, complex meshes at an interactive rate.
20. The method of claim 10 further comprising the ability to trim arbitrarily across surface faces.
21. The method of claim 10 further comprising the ability to sketch contour lines to produce an initial 3D shape, useable in conjunction with the SBS modeling process.
22. A computer graphics system for generating a geometrical model representing geometry of at least a portion of a surface of a three-dimensional object by shading by an operator in connection with a two-dimensional image of the object, the image representing the object as projected onto an image plane, the computer graphics system comprising: A. an operator input device configured to receive shading information provided by the operator, the shading information representing a change in brightness level of at least a portion of the image; B. a model generator configured to receive the shading information from the operator input device and to generate in response thereto an updated geometrical model of the object, the model generator being configured to use the shading information to determine at least one geometrical feature of the updated geometrical model; C. an object display configured to display the image of the object as defined by the updated geometrical model; and D. wherein the system can accept any hierarchical subdivision surface, polygon mesh or NURBS surface.
23. A computer program product operable within a computer graphics system, the computer graphics system comprising a human-useable input device and a display device operable to generate a human-perceptible display, the computer program product comprising computer software code instructions executable by the computer graphics system and encoded on a computer-readable medium, the computer program product being operable within the computer graphics system to generate a geometrical model representing geometry of at least a portion of a surface of a three-dimensional object by shading by an operator in connection with a two-dimensional image of the object, the image representing the object as projected onto an image plane, the computer program product comprising: A. first computer software code means operable to receive shading information provided by an operator using an input device, the shading information representing a change in brightness level of at least a portion of the image; B. model generator computer software code means operable to receive the shading information from the operator input device and to generate in response thereto an updated geometrical model of the object, the model generator computer software code means being operable to use the shading information to determine at least one geometrical feature of the updated geometrical model; and C. object display computer software code means configured to enable the computer graphics system to display, on a display device, the image of the object as defined by the updated geometrical model; and D. wherein the computer program product is operable to accept any hierarchical subdivision surface, polygon mesh or NURBS surface.
24. The method of claim 14 further comprising minimizing by using a Trust-Region Newton-CG method.
25. The method of claim 24 wherein residuals are utilized to obtain the function, its gradient and its Hessian.
26. The method of claim 14 wherein calculations are integrated with or into an API.
27. The method of claim 14 wherein calculations are integrated with or into a computer software application plug-in.

