EP1661116A1 - Improved paint projection method and apparatus - Google Patents

Improved paint projection method and apparatus

Info

Publication number
EP1661116A1
Authority
EP
European Patent Office
Prior art keywords
dimensional
view
image
dimensional object
processor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP04801835A
Other languages
German (de)
French (fr)
Other versions
EP1661116A4 (en)
Inventor
Thomas Hahn
Rick Sayre
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Pixar
Original Assignee
Pixar
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Pixar filed Critical Pixar
Publication of EP1661116A1
Publication of EP1661116A4

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/001 Texturing; Colouring; Generation of texture or colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/016 Exploded view
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20 Indexing scheme for editing of 3D models
    • G06T2219/2012 Colour editing, changing, or manipulating; Use of colour codes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20 Indexing scheme for editing of 3D models
    • G06T2219/2016 Rotation, translation, scaling

Definitions

  • the present invention relates to computer animation. More specifically, the present invention relates to enhanced methods and apparatus for specifying surface properties of animation objects.
  • Stop motion-based animation techniques typically required the construction of miniature sets, props, and characters. The filmmakers would construct the sets, add props, and position the miniature characters in a pose. After the animator was happy with how everything was arranged, one or more frames of film would be taken of that specific arrangement. Stop motion animation techniques were developed by movie makers such as Willis O'Brien for movies such as "King Kong” (1932). Subsequently, these techniques were refined by animators such as Ray Harryhausen for movies including "The Mighty Joe Young” (1948) and Clash Of The Titans (1981).
  • One of the pioneering companies in the computer aided animation (CAA) industry was Pixar, dba Pixar Animation Studios. Over the years, Pixar developed and offered both computing platforms specially designed for CAA, and Academy Award®-winning rendering software known as RenderMan®.
  • CAA computer aided animation
  • Pixar has also developed software products and software environments for internal use allowing users (modelers) to easily define object rigs and allowing users (animators) to easily animate the object rigs. Based upon such real-world experience, the inventors of the present invention have determined that additional features could be provided to such products and environments to facilitate the object definition and animation process.
  • One such feature includes methods and apparatus for facilitating the definition of surface properties of objects.
  • the present invention relates to computer animation. More specifically, the present invention relates to methods and apparatus allowing a user to specify surface parameters of an object or portions of an object that are in different poses.
  • Embodiments of the present invention are used to help manage the process of creating three dimensional "paintings."
  • Embodiments control the definition of multiple poses, manage the rendering of views, provide the mechanism for transmitting texture information to the surface materials, provide cataloging and source control for textures and other data files, and the like.
  • a user can effectively paint "directly” onto a three dimensional object using any conventional two dimensional painting program.
  • the painting program relies on "layers.”
  • the user paints a number of two dimensional paintings (e.g. overlay images), on different views of the object. Typical views are cameras with orientation of "front,” “back,” and the like.
  • the user can re-pose the object model, in multiple configurations, if the model is too hard to paint fully in a single reference pose. Additionally, the user can paint a number of overlay images of views of the reposed object.
  • a typical workflow of embodiments of the present invention includes: loading an object model into the system and posing the object model in different configurations. For example, to paint a table the user may have one pose that defines the table and another pose that "explodes" the model by translating the table legs away from the bottom of the table. Next, the workflow may include creating or defining one or more views to paint on the model in the different poses.
  • a rendering pass is performed on the object in the different poses and in the defined views.
  • the results of the rendering pass are typically bitmap images, views, of the rendered object and associated depth maps of the rendered surfaces.
  • the workflow may also include the user loading the rendered bitmaps into a two dimensional paint program and painting one or more passes representing color, displacement, or the like.
  • the system computes at render time the result of a planar projection (reverse map) of each object in each pose to each view and stores the resulting 2D coordinates of every visible surface point.
  • the surface shader will use these stored 2D coordinates for evaluating surface parameters, such as 2D texture maps, for each pass.
  • the values returned by the pass computation are then used to produce different effects in the shader like coloring or displacing the surfaces affected by paint.
  • non-planar projections such as perspective projections are used.
  • the depth map is evaluated during the planar projection phase to ensure that only the foremost surface relative to the projected view receives the paint. Additionally, the surface normals are taken into consideration during the projection process to avoid projecting paint onto surfaces that are perpendicular or facing away from the projection view.
  • the surface shader finishes its computation.
  • the resulting rendered object model is typically posed in a different pose from the poses described above.
  • the rendered object model is typically rendered in context of a scene, and the rendered scene is stored in memory. At a later time, the rendered scene is typically retrieved from memory and displayed to a user. In various embodiments, the memory may be a hard disk drive, RAM, DVD-ROM, CD-ROM, film media, print media, and the like.
  • embodiments of the present invention allow users to pose articulated three-dimensional object models in multiple configurations for receiving projected paint from multiple views. These embodiments increase the efficiency and effectiveness of applying surface parameters, such as multiple texture maps, colors, and the like, onto surfaces of complex deformable three dimensional object models.
  • Advantages of embodiments of the present invention include the capability to allow the user to paint any three dimensional object model from multiple viewpoints and multiple pose configurations of the object.
  • the concept of multiple pose configurations allows the user to paint in areas that may not be directly accessible unless the model is deformed or decomposed into smaller pieces.
  • Embodiments of the present invention introduce unique techniques for organizing the multiple view/poses and for applying the resulting texture maps back onto the object. More specifically, the embodiments selectively control which surfaces receive paint using the surface orientation (normals) and depth maps rendered from the projecting views.
  • a method for a computer system includes posing at least a portion of a three-dimensional object model in a first configuration, determining a first two-dimensional view of at least the portion of the three-dimensional object model while in the first configuration, posing the portion of the three-dimensional object model in a second configuration, and determining a second two- dimensional view of the portion of the three-dimensional object model while in the second configuration.
  • Various techniques also include associating a first two-dimensional image with the first two-dimensional view of at least the portion of the object model, and associating a second two-dimensional image with the second two-dimensional view of the portion of the object model.
  • the process may also include associating a first set of surface parameters with a surface of at least the portion of the three-dimensional object model that is visible in the first two-dimensional view in response to the first two-dimensional image and in response to the first configuration for at least the portion of the three-dimensional object model, and associating a second set of surface parameters with a surface of the portion of the three-dimensional object model that is visible in the second two-dimensional view in response to the second two-dimensional image and in response to the second configuration for the portion of the three-dimensional object model.
  • a computer program product for a computer system including a processor includes code that directs the processor to receive a first configuration for at least a portion of a three- dimensional object, code that directs the processor to determine a first two-dimensional image, wherein the first two-dimensional image exposes a surface of at least the portion of the three-dimensional object in the first configuration, code that directs the processor to receive a second configuration for at least the portion of the three-dimensional object, and code that directs the processor to determine a second two-dimensional image, wherein the second two-dimensional image exposes a surface of at least the portion of the three- dimensional object in the second configuration.
  • Additional computer code may include code that directs the processor to receive a first two-dimensional paint image, wherein the first two-dimensional paint image is associated with the first two-dimensional image, and code that directs the processor to receive a second two-dimensional paint image, wherein the second two-dimensional paint image is associated with the second two-dimensional image.
  • the code may also include code that directs the processor to determine a first group of parameters in response to the first two-dimensional paint image, wherein the first group of parameters is associated with the surface of at least the portion of the three-dimensional object in the first configuration, and code that directs the processor to determine a second group of parameters in response to the second two-dimensional paint image, wherein the second group of parameters is associated with the surface of at least the portion of the three- dimensional object in the second configuration.
  • the codes may include machine readable or human readable code on a tangible media. Typical media includes a magnetic disk, an optical disk, or the like.
  • a computer system typically includes a display, a memory, and a processor.
  • the memory is configured to store a model of a three-dimensional object, a first pose and a second pose for the three-dimensional object, a first two- dimensional image and a second two dimensional-image, and surface shading parameters associated with a surface of the three-dimensional object.
  • the processor is typically configured to output a first view of the three-dimensional object in the first pose to the display, configured to output a second view of the three-dimensional object in the second pose to the display, and configured to receive the first two-dimensional image and to receive the second two-dimensional image.
  • the processor may also be configured to determine a first set of surface parameters associated with surfaces of the three-dimensional object in response to the first view of the three-dimensional object and in response to the first two-dimensional image, and configured to determine a second set of surface parameters associated with additional surfaces of the three-dimensional object in response to the second view of the three-dimensional object and in response to the second two-dimensional image.
  • FIG. 1 illustrates a block diagram of a system according to one embodiment of the present invention
  • FIG. 2 illustrates a block diagram of an embodiment of the present invention
  • FIGs. 3A-B illustrate a flow process according to an embodiment of the present invention
  • FIG. 4 illustrates an example of an embodiment
  • FIGs. 5A-C illustrate one example of an embodiment of the present invention
  • FIGs. 6A-D illustrate another example of an embodiment of the present invention.
  • FIGs. 7A-C illustrate another example of an embodiment of the present invention.
  • Gprim geometric primitive
  • Bspline parametric function
  • Model (Object Model): a collection of Gprims organized in an arbitrary number of faces (polygon meshes and subdivision surfaces), implicit surfaces or the like. The system does not require a 2D surface parameterization to perform its operations.
  • View: an orthographic or perspective camera that can generate an image of the model from a specific viewpoint.
  • Pose: the state of the model in terms of specific rigid transformations in its hierarchy and specific configuration of its Gprims. A pose also describes the state of one or more views.
  • a pose typically includes the position and orientation of both a model and all the view cameras.
  • a pose specifies a particular configuration or orientation of more than one object within an object model.
  • a pose may specify that two objects are a particular distance from each other, or that two objects are a particular angle with respect to each other, or the like. Examples of different poses of objects will be illustrated below.
  • Pass: type of painting, for example "color" or "displacement", along with the number of color channels that are to be used in the pass.
  • the name provides a handle with which to reference a set of paintings within the shader. Typically the names are arbitrary.
  • FIG. 1 is a block diagram of typical computer system 100 according to an embodiment of the present invention.
  • computer system 100 typically includes a monitor 110, computer 120, a keyboard 130, a user input device 140, a network interface 150, and the like.
  • user input device 140 is typically embodied as a computer mouse, a trackball, a track pad, wireless remote, drawing tablet, an integrated display and tablet (e.g. Cintiq by Wacom), voice command system, eye tracking system, and the like.
  • User input device 140 typically allows a user to select objects, icons, text and the like that appear on the monitor 110.
  • Embodiments of network interface 150 typically include an Ethernet card, a modem (telephone, satellite, cable, ISDN), (asynchronous) digital subscriber line (DSL) unit, and the like.
  • Network interface 150 is typically coupled to a computer network as shown. In other embodiments, network interface 150 may be physically integrated on the motherboard of computer 120, may be a software program, such as soft DSL, or the like.
  • Computer 120 typically includes familiar computer components such as a processor 160, and memory storage devices, such as a random access memory (RAM) 170, disk drives 180, and system bus 190 interconnecting the above components.
  • RAM random access memory
  • computer 120 is a PC compatible computer having one or more microprocessors such as Pentium IV™ or Xeon™ microprocessors from Intel Corporation. Further, in the present embodiment, computer 120 typically includes a LINUX-based operating system.
  • RAM 170 and disk drive 180 are examples of tangible media for storage of data, audio / video files, computer programs, scene descriptor files, object data files, overlay images, depth maps, shader descriptors, a rendering engine, a shading engine, output image files, texture maps, displacement maps, painting environment, object creation environments, animation environments, surface shading environment, asset management systems, databases and database management systems, and the like.
  • Other types of tangible media include floppy disks, removable hard disks, optical storage media such as CD-ROMS, DVDs and bar codes, semiconductor memories such as flash memories, read-only-memories (ROMS), battery-backed volatile memories, networked storage devices, and the like.
  • computer system 100 may also include software that enables communications over a network such as the HTTP, TCP/IP, RTP/RTSP protocols, and the like. In alternative embodiments of the present invention, other communications software and transfer protocols may also be used, for example IPX, UDP or the like.
  • Fig. 1 is representative of computer rendering systems capable of embodying the present invention. It will be readily apparent to one of ordinary skill in the art that many other hardware and software configurations are suitable for use with the present invention.
  • the computer may be a desktop, portable, rack-mounted or tablet configuration.
  • other microprocessors are contemplated, such as Pentium™ or Itanium™ microprocessors; Opteron™ or Athlon-XP™ microprocessors from Advanced Micro Devices, Inc.; PowerPC G4™, G5™ microprocessors from Motorola, Inc.; and the like.
  • Windows® operating systems such as WindowsXP®, WindowsNT®, or the like from Microsoft Corporation
  • Solaris from Sun Microsystems
  • LINUX
  • UNIX
  • MAC OS from Apple Computer Corporation
  • Fig. 2 illustrates a block diagram of an embodiment of the present invention. Specifically, Fig. 2 illustrates an animation environment 200, an object creation environment 210, and a storage system 220.
  • object creation environment 210 is an environment that allows users (modellers) to specify object articulation models, including armatures and rigs. Within this environment, users can create models (manually, procedurally, etc.) of objects and specify how the objects articulate with respect to animation variables (Avars). In one specific embodiment, object creation environment 210 is a Pixar proprietary object creation environment known as "Gepetto." In other embodiments, other types of object creation environments can be used.
  • object creation environment 210 may also be used by users (shaders) to specify surface parameters of the object models.
  • an environment may be provided within object creation environment 210, or separately, that allows users to assign parameters to the surfaces of the object models via painting.
  • the surface parameters include color data, texture mapping data, displacement data, and the like. These surface parameters are typically used to render the object within a scene.
  • the environment allows the user to define poses for the object model. Additionally, it allows the user to render views of the object model in the different poses.
  • the environment also provides the mechanism to perform planar projections (with possible use of depth maps and surface normals) on "reference" poses, also known as Pref, while shading and rendering the standard object configuration, and maintains association among the different views, different poses, different paint data, different surface parameter data, and the like, as will be described below.
  • the object models that are created with object creation environment 210 may also be used in animation environment 200.
  • object models are hierarchically built. The hierarchical nature for building up object models is useful because different users (modellers) are typically assigned the tasks of creating the different models. For example, one modeller is assigned the task of creating a hand model 290, a different modeller is assigned the task of creating a lower arm model 280, and the like.
  • animation environment 200 is an environment that allows users (animators) to manipulate object articulation models, via the animation variables (Avars).
  • animation environment 200 is a Pixar proprietary animation environment known as "MenV," although in other embodiments, other animation environments could also be adapted.
  • animation environment 200 allows an animator to manipulate the Avars provided in the object models (generic rigs) and to move the objects with respect to time, i.e. animate an object.
  • animation environment 200 and object creation environment 210 may be combined into a single integrated environment.
  • storage system 220 may include any organized and repeatable way to access object articulation models.
  • storage system 220 includes a simple flat-directory structure on local drive or network drive; in other embodiments, storage system 220 may be an asset management system or a database access system tied to a database, or the like.
  • storage system 220 receives references to object models from animation environment 200 and object creation environment 210. In return, storage system 220 provides the object model stored therein.
  • Storage system 220 typically also stores the surface shading parameters, overlay images, depth maps, etc. discussed herein.
  • Pixar's object creation environment allowed a user to paint and project images (textures) from multiple views onto an object model in a specific configuration (pose).
  • the inventors of the present invention recognized the object creation environment did not support views of objects in different poses and that it was difficult to apply textures on complicated three dimensional models in a single pose.
  • Figs. 3A-B illustrate a flow process according to an embodiment of the present invention.
  • a three dimensional object model is provided, step 300.
  • one or more users specify a geometric representation of one or more objects via an object creation environment. Together, these objects are combined to form a model of a larger object. In the present embodiment, the modeler may use an object creation environment such as Gepetto, or the like.
  • additional users specify how the surface of the objects should appear.
  • such users are known as "shaders"
  • specify any number of surface effects of the object such as base color, scratches, dirt, displacement maps, roughness and shininess maps, transparency and control over material type.
  • the user takes the three dimensional object model and specifies an initial pose, step 310.
  • this step need not be specifically performed, as objects have default poses.
  • an object model for a character such as an automobile may have a default pose with its doors closed.
  • the user typically specifies one or more view camera positions. In other embodiments, a number of default cameras may be used for each object.
  • commonly specified projection views include a top view, a left side view, a right side view, a bottom view, and the like. Additionally, the camera may be an oblique view, or the like.
  • Projection views may be planar, but may also be non-planar and projective. As examples, perspective projections are contemplated, where curved projectors map an image onto a curved surface.
  • the computer system renders a two dimensional view of the three-dimensional object in the first pose, step 320. More specifically, the system renders each view by using the object model, the pose, and view camera data.
  • the rendering pass may be a high quality rendering via a rendering program such as Pixar's Renderman product.
  • the rendering / shading process may be performed with a low quality rendering process, such as GL, and GPU hardware and software renderers.
  • each rendered view may be stored as individual view image files, or combined into a larger file.
  • a depth map is also generated, which the planar projection function, described below, utilizes.
  • the system displays the one or more rendered views of the object, step 330.
  • this step occurs in a user environment that allows the user to graphically assign pixel values on a two dimensional image. Commonly, such environments are said to include "paint" functionality.
  • the one or more views can be simultaneously displayed to the user.
  • a user assigns pixel values to the views of the object, step 340.
  • the user performs this action by graphically painting "on top" of the views of the object.
  • the painting is analogous to a child painting or coloring an image in a coloring book.
  • the user applies different brushes onto the views of the object, using an overlay layer or the like.
  • the use of mechanisms analogous to "layers" is contemplated herein.
  • the different brushes may have one or more gray scale values, one or more colors, and the like.
  • the user may use a fine black brush to draw a crack-type pattern in an overlay layer of the view.
  • the user may use a spray paint-type brush to darken selected portions in an overlay layer of the view.
  • the user may use a paint brush to color an overlay layer of the view.
  • other ways to specify an overlay layer image are also contemplated, such as the application of one or more gradients to the image, the application of manipulations limited to specific portions of the overlay layer image (e.g. selections), the inclusion of one or more images into an overlay layer image (e.g. a decal layer), and the like.
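As an illustration of the decal-layer idea just mentioned, the following is a minimal sketch of compositing a decal image into an overlay layer with "over" alpha blending. The array layout and the function name are assumptions for this example, not anything specified by the patent.

```python
import numpy as np

def composite_decal(overlay_rgb, overlay_a, decal_rgb, decal_a, x, y):
    """Paste a decal over the overlay layer at (x, y) using "over" blending."""
    h, w = decal_a.shape
    region = overlay_rgb[y:y + h, x:x + w]
    a = decal_a[..., None]                      # broadcast alpha over RGB
    overlay_rgb[y:y + h, x:x + w] = decal_rgb * a + region * (1.0 - a)
    overlay_a[y:y + h, x:x + w] = decal_a + overlay_a[y:y + h, x:x + w] * (1.0 - decal_a)

overlay, alpha = np.zeros((8, 8, 3)), np.zeros((8, 8))
decal, decal_a = np.ones((2, 2, 3)), np.full((2, 2), 0.5)
composite_decal(overlay, alpha, decal, decal_a, 3, 3)
print(overlay[3, 3], alpha[3, 3])               # [0.5 0.5 0.5] 0.5
```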
  • the overlay layer image for each view is then stored, step 350.
  • the overlay layer images are stored in separate and identifiable files from the two dimensional view.
  • the overlay layer image is stored in a layer of the two dimensional view, or the like.
  • the file including the overlay layer image is also associated with the pose defined in step 310, and depth map determined in step 320, step 360.
  • the user may decide to re-pose the three-dimensional object in a second pose, step 370.
  • the process described above is then repeated.
  • the process of re-posing the three-dimensional object, creating one or more views and depth maps, painting on top of the views, etc. can be repeated for as many poses as the user deems necessary.
  • for a character object, one pose may be the character with an opened mouth and arms up, and another pose may be the character with a closed mouth and arms down.
  • one pose may be the folding table unfolded, and another pose may be the folding table with its legs "exploded" or separated from the table top.
  • a user may see views derived from different poses of the three-dimensional object on the screen at the same time. Accordingly, the process of viewing and painting described above need not be performed only based upon one pose of the object at a time. Additionally, the user may paint on top of views of the object from different poses in the same session. For example, for a character posed with its mouth open, the user may paint white on a layer on a view showing the character's mouth, then the user may paint black on a layer of a view showing the character's hair, then the user may repaint a different shade of white on the layer on the view showing the character's mouth, and the like.
  • the next step is to associate values painted on each view of the three-dimensional object in the first pose back to the object, step 380.
  • each view of the object is typically a projection of surfaces of the three-dimensional object in the first pose into two-dimensions. Accordingly, portions that appear to be "painted upon" by the overlay image are projected back to the three-dimensional object using the associated depth map.
  • This functionality is enabled because the system maintains a linkage among the overlay image, the view, and the pose of the three-dimensional object. In cases where there are multiple rendered views, the paint is projected back for each rendered view to the three-dimensional object in the first pose.
  • surface normals may be used to "feather" the effect of the projection onto surfaces of the three-dimensional object.
  • for a surface directly facing the projection view, the paint effect may be calculated to be ~100%; whereas for a surface tilted at a 30 degree angle away from the projection view, the paint effect may be calculated to be ~50% (1 - sin(30°)); whereas for a surface tilted at a 60 degree angle, the paint effect may be calculated to be ~13% (1 - sin(60°)); and the like.
  • the amount of feathering may be adjusted by the user. In other embodiments, feathering may be used to vary the paint effect at transition areas, such as the edges or borders of the object, and the like. In various embodiments, feathering of the projected paint reduces smearing of the projected paint.
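The feathering rule above can be made concrete with a small sketch. Assuming the tilt angle is measured away from a surface that directly faces the projection view, a weight of 1 - sin(tilt) reproduces the approximate figures quoted (100%, 50%, 13%); the function name is illustrative only.

```python
import math

def feather_weight(tilt_degrees: float) -> float:
    """Paint attenuation for a surface tilted away from the projection view.

    0 degrees  -> surface faces the view head-on, full paint (~100%)
    90 degrees -> surface edge-on to the view, no paint.
    """
    t = math.radians(max(0.0, min(90.0, tilt_degrees)))
    return 1.0 - math.sin(t)

for angle in (0, 30, 60, 90):
    print(f"{angle:2d} deg -> {feather_weight(angle):.0%}")
# 0 deg -> 100%, 30 deg -> 50%, 60 deg -> 13%, 90 deg -> 0%
```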
  • FIG. 4 illustrates an example of an embodiment.
  • a three- dimensional cylinder 500 appears as a rectangle 510 in a two-dimensional view 520 of cylinder 500.
  • a user paints an overlay image 530 on top of view 520.
  • the user paints the bottom half of cylinder 500 black.
  • the overlay image is projected onto the three-dimensional cylinder; accordingly, the front bottom surface 540 of cylinder 500 is associated with the property or color of black and feathered as the surface normal points away from the viewing plane.
  • the back bottom surface 550 of cylinder 500 is not associated with the color black, as it was not exposed in view 520.
  • a back view 560 and a bottom view 570 of cylinder 500 could be specified to expose remaining bottom half surfaces of cylinder 500.
  • each view of the object is typically a projection of surfaces of the three-dimensional object in the second pose into two-dimensions. Accordingly, portions that appear to be "painted upon" by the overlay image are projected to the three-dimensional object using the associated depth map. Again, in cases where there are multiple rendered views, the paint is projected back for each rendered view to the three-dimensional object in the second pose.
  • the planar projections from step 380 and step 390 are combined and both projected back to the surface of the three-dimensional object, step 400.
  • the users may paint upon the rendered views of the three-dimensional object that are in different poses, and have the paint data be projected back to a single three-dimensional object in a neutral pose.
  • the inventors of the present invention believe that the above functionality is significant as it allows the user to "paint" hard-to-reach portions of a three-dimensional object by allowing the user to repose the three-dimensional object, and allowing the user to paint upon the resulting rendered view.
  • one pose may be a character with their mouth closed, and another with the mouth open. Further, examples of the use of embodiments of the present invention will be illustrated below.
  • this step 400 can be performed before a formal rendering of the three-dimensional object. In other embodiments, step 400 occurs dynamically during a formal rendering process.
  • the data from steps 380 and 390 may be maintained in separate files.
  • the system dynamically combines the planar projection data from the three-dimensional object in the first pose with the planar projection data from the three-dimensional object in the second pose.
  • the combined planar projection data is then used to render the three-dimensional object in typically a third pose, step 410.
  • the first pose may be a character with both the arms down
  • the second pose may be the character with both the arms up
  • the third pose may be the character with only one arm up.
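A minimal sketch of the combining step described above, under the assumption that paint projected in each pose is stored per surface point and that later poses take precedence where points overlap. The data layout here is hypothetical, not the patent's own format.

```python
# Merge {point_id: color} maps gathered from projections in several poses.
def combine_pose_projections(*pose_paint_maps):
    combined = {}
    for paint in pose_paint_maps:   # earlier poses first
        combined.update(paint)      # later poses win where points overlap
    return combined

arms_down = {101: (0.8, 0.2, 0.2), 102: (0.8, 0.2, 0.2)}   # painted in pose 1
arms_up   = {102: (0.1, 0.1, 0.1), 203: (0.9, 0.9, 0.9)}   # painted in pose 2
print(combine_pose_projections(arms_down, arms_up))
# {101: (0.8, 0.2, 0.2), 102: (0.1, 0.1, 0.1), 203: (0.9, 0.9, 0.9)}
```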
  • the paint data may specify any number of properties for the surface of the object. Such properties are also termed shading pass data. For a typical object surface, there may be more than one hundred shading passes.
  • the paint data may specify surface colors, application of texture maps, application of displacement maps, and the like.
  • the planar projections from steps 380 and 390 may apply the same properties or different properties to the surface of the object.
  • step 380 may be a surface "crack" pass
  • step 390 may be a surface color pass.
  • the object is rendered at the same time as other objects in a scene.
  • the rendered scene is typically another two-dimensional image that is then stored, step 420.
  • the rendered scene can be stored in optical form such as film, optical disk (e.g. CD-ROM, DVD), or the like; magnetic form such as a hard disk, network drive, or the like; electronic form such as an electronic signal, a data packet, or the like.
  • the representation of the resulting rendered scene may later be retrieved and displayed to one or more viewers, step 430.
  • FIGs. 5A-C illustrate one example of an embodiment of the present invention. Specifically, Fig. 5A illustrates a three-dimensional model of a box 600 in a closed pose. In Fig. 5B, a number of two-dimensional views of box 600 are illustrated, including a front view 610, a top view 620, and a side view 630.
  • a user creates overlay images 640-660 on top of views 610-630, respectively. As discussed above, the user typically paints on top of views 610-630 to create overlay images 640-660.
  • Fig. 5D illustrates a three-dimensional model of box 670 in the closed pose after overlay images 640-660 are projected back to the three-dimensional model of box 600 in the closed pose.
  • FIGs. 6A-D illustrate another example of an embodiment of the present invention. Specifically, Fig. 6A illustrates a three-dimensional model of a box 700 in an open pose. In Fig. 6B, two-dimensional views of box 700 are illustrated, including a top view 710, a first cross-section 720, and a second cross-section 730.
  • a user creates overlay images 740-760 on top of views 710-730, respectively. Again, the user typically paints on top of the respective views to create overlay images.
  • Fig. 6D illustrates a three-dimensional model of box 770 in the open pose after overlay images 740-760 are projected back to the three-dimensional model of box 700 in the open pose.
  • the three-dimensional model of box 670 and of box 770 are then combined into a single three-dimensional model. Illustrated in Fig. 6E is a single three-dimensional model of a box 780 including the projected back data from Fig. 5C and 6C. As shown in Fig. 6E, the three-dimensional model may be posed in a pose different from the pose in Fig. 5A or Fig. 6A.
  • FIGs. 7A-C illustrate another example of an embodiment of the present invention. More specifically, Fig. 7A illustrates a three-dimensional model of a stool in a default pose 800. In Fig. 7B, a number of views 810 of stool 800 in the default pose are illustrated. In this example, the user can paint upon views 810, as described above. Fig. 7C then illustrates the three-dimensional model of the stool in a second pose 820. As can be seen, the legs 830 of the stool are "exploded" or separated from the sitting surface. A number of views 840 are illustrated. In this example, it can be seen that with views 840, the user can more easily paint the bottom of the sitting surface 850 and the legs 860 of the stool.
  • the painting functions may be provided integral to an object creation environment, a separate shading environment, a third-party paint program (e.g. Photoshop, Maya, Softimage), and the like.
  • Some embodiments described above use planar projection techniques to form views of an object and to project back the overlay layer to the three dimensional object.
  • Other embodiments may also use non-planar projection techniques to form perspective views of an object, and to project back to the three dimensional object.
  • the process of painting in an overlay layer and performing a planar projection back to the three dimensional object may be done in real-time or near real time for multiple poses of the object.
  • the user may be presented with a first view of the object in the first pose, and a second view of the object in a second pose.
  • the user paints in an overlay layer of the first view.
  • a planar projection process occurs that projects the paint back to the three dimensional object.
  • the system re-renders the first view of the object in the first pose and also the second view of the object in the second pose.
  • Because the process occurs very quickly, the user can see the effect of the specification of surface parameters on the object in one pose reflected in all other poses (views of other poses).
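The near-real-time loop described above might look like the following sketch, with stand-in functions for painting, projection, and re-rendering; every name here is illustrative, none is from the patent.

```python
# Sketch of the paint -> project -> re-render loop. Each stroke painted in
# one view is projected back to the model, then every open view (any pose)
# is re-rendered so the user sees the stroke's effect everywhere.

def project_to_model(model, view, stroke):
    model.setdefault("paint", []).append((view, stroke))

def render_view(model, view):
    return f"{view}: {len(model.get('paint', []))} strokes visible"

model = {}
views = ["front / pose 1", "front / pose 2"]

for stroke in ("crack", "dirt"):       # user paints in the first view
    project_to_model(model, views[0], stroke)
    for v in views:                    # both poses reflect the new paint
        print(render_view(model, v))
```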
  • the above embodiments disclose a method for a computer system and a computer system capable of performing the disclosed methods. Additional embodiments include computer program products on tangible media including software code that allows the computer system to perform the disclosed methods, and the like.

Abstract

A method for a computer system includes posing a 3D model (500) in a first configuration, determining a first 2D (510) view of the 3D model in the first configuration, posing the 3D model in a second configuration (550), determining a second 2D (570) view of the 3D model in the second configuration, associating a first 2D image with the first 2D view of the model, associating a second 2D (570) image with the second 2D view of the model, associating a first set of surface parameters with a surface of the 3D model that is visible in the first 2D view in response to the first 2D image and the first configuration for the 3D model, and associating a second set of surface parameters with a surface of the 3D model that is visible in the second 2D view in response to the second 2D image and the second configuration for the 3D model.

Description

Improved Paint Projection Method and Apparatus

BACKGROUND OF THE INVENTION
[0001] The present invention relates to computer animation. More specifically, the present invention relates to enhanced methods and apparatus for specifying surface properties of animation objects.
[0002] Throughout the years, movie makers have often tried to tell stories involving make-believe creatures, far away places, and fantastic things. To do so, they have often relied on animation techniques to bring the make-believe to "life." Two of the major paths in animation have traditionally included drawing-based animation techniques and stop motion animation techniques.
[0003] Drawing-based animation techniques were refined in the twentieth century, by movie makers such as Walt Disney and used in movies such as "Snow White and the Seven Dwarfs" and "Fantasia" (1940). This animation technique typically required artists to hand-draw (or paint) animated images onto transparent media or cels. After painting, each cel would then be captured or recorded onto film as one or more frames in a movie.
[0004] Stop motion-based animation techniques typically required the construction of miniature sets, props, and characters. The filmmakers would construct the sets, add props, and position the miniature characters in a pose. After the animator was happy with how everything was arranged, one or more frames of film would be taken of that specific arrangement. Stop motion animation techniques were developed by movie makers such as Willis O'Brien for movies such as "King Kong" (1932). Subsequently, these techniques were refined by animators such as Ray Harryhausen for movies including "The Mighty Joe Young" (1948) and Clash Of The Titans (1981).
[0005] With the wide-spread availability of computers in the later part of the twentieth century, animators began to rely upon computers to assist in the animation process. This included using computers to facilitate drawing-based animation, for example, by painting images, by generating in-between images ("tweening"), and the like. This also included using computers to augment stop motion animation techniques. For example, physical models could be represented by virtual models in computer memory, and manipulated.
[0006] One of the pioneering companies in the computer aided animation (CAA) industry was Pixar, dba Pixar Animation Studios. Over the years, Pixar developed and offered both computing platforms specially designed for CAA, and Academy Award®-winning rendering software known as RenderMan®.
[0007] Over the years, Pixar has also developed software products and software environments for internal use allowing users (modelers) to easily define object rigs and allowing users (animators) to easily animate the object rigs. Based upon such real-world experience, the inventors of the present invention have determined that additional features could be provided to such products and environments to facilitate the object definition and animation process. One such feature includes methods and apparatus for facilitating the definition of surface properties of objects.
[0008] The inventors of the present invention have determined that improved methods for specifying surface parameters to an object are needed.
BRIEF SUMMARY OF THE INVENTION
[0009] The present invention relates to computer animation. More specifically, the present invention relates to methods and apparatus allowing a user to specify surface parameters of an object or portions of an object that are in different poses.
[0010] Embodiments of the present invention are used to help manage the process of creating three dimensional "paintings." Embodiments control the definition of multiple poses, manage the rendering of views, provide the mechanism for transmitting texture information to the surface materials, provide cataloging and source control for textures and other data files, and the like.
[0011] With embodiments of the present invention, a user can effectively paint "directly" onto a three dimensional object using any conventional two dimensional painting program. In one embodiment, the painting program relies on "layers." With embodiments, the user paints a number of two dimensional paintings (e.g. overlay images), on different views of the object. Typical views are cameras with orientation of "front," "back," and the like. With embodiments of the present invention, the user can re-pose the object model, in multiple configurations, if the model is too hard to paint fully in a single reference pose. Additionally, the user can paint a number of overlay images of views of the reposed object.
[0012] A typical workflow of embodiments of the present invention includes: loading an object model into the system and posing the object model in different configurations. For example, to paint a table the user may have one pose that defines the table and another pose that "explodes" the model by translating the table legs away from the bottom of the table. Next, the workflow may include creating or defining one or more views to paint on the model in the different poses.
[0013] In various embodiments, a rendering pass is performed on the object in the different poses and in the defined views. The results of the rendering pass are typically bitmap images, views, of the rendered object and associated depth maps of the rendered surfaces. The workflow may also include the user loading the rendered bitmaps into a two dimensional paint program and painting one or more passes representing color, displacement, or the like.
[0014] Later, the system computes at render time the result of a planar projection (reverse map) of each object in each pose to each view and stores the resulting 2D coordinates of every visible surface point. The surface shader will use these stored 2D coordinates for evaluating surface parameters, such as 2D texture maps, for each pass. The values returned by the pass computation are then used to produce different effects in the shader like coloring or displacing the surfaces affected by paint. In other embodiments, non-planar projections such as perspective projections are used.
[0015] In embodiments, the depth map is evaluated during the planar projection phase to ensure that only the foremost surface relative to the projected view receives the paint. Additionally, the surface normals are taken into consideration during the projection process to avoid projecting paint onto surfaces that are perpendicular or facing away from the projection view.
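As a rough sketch of these two tests, consider an orthographic view looking down -Z, a depth map storing the Z of the foremost surface per pixel, and a dot-product check on the surface normal. The conventions and names below are assumptions for illustration, not the patent's own formulation.

```python
import numpy as np

def receives_paint(point, normal, depth_map,
                   view_dir=np.array([0.0, 0.0, -1.0]), eps=1e-4):
    """True if `point` is the foremost surface at its pixel and faces the view."""
    px, py = int(round(point[0])), int(round(point[1]))   # planar projection
    if not (0 <= py < depth_map.shape[0] and 0 <= px < depth_map.shape[1]):
        return False
    if point[2] < depth_map[py, px] - eps:   # occluded by a nearer surface
        return False
    if np.dot(normal, -view_dir) <= 0.0:     # perpendicular or facing away
        return False
    return True

depth = np.full((4, 4), -np.inf)
depth[1, 2] = 5.0                             # nearest surface at pixel (2, 1)
front = receives_paint(np.array([2.0, 1.0, 5.0]), np.array([0, 0, 1.0]), depth)
back  = receives_paint(np.array([2.0, 1.0, 1.0]), np.array([0, 0, -1.0]), depth)
print(front, back)                            # True False
```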
[0016] In embodiments of the present invention, after all the projection passes are resolved for every view and every pose of the object model, the surface shader finishes its computation. The resulting rendered object model is typically posed in a different pose from the poses described above.
[0017] In various embodiments, the rendered object model is typically rendered in context of a scene, and the rendered scene is stored in memory. At a later time, the rendered scene is typically retrieved from memory and displayed to a user. In various embodiments, the memory may be a hard disk drive, RAM, DVD-ROM, CD-ROM, film media, print media, and the like.
[0018] According to the above, embodiments of the present invention allow users to pose articulated three-dimensional object models in multiple configurations for receiving projected paint from multiple views. These embodiments increase the efficiency and effectiveness of applying surface parameters, such as multiple texture maps, colors, and the like, onto surfaces of complex deformable three dimensional object models.
[0019] Advantages of embodiments of the present invention include the capability to allow the user to paint any three dimensional object model from multiple viewpoints and multiple pose configurations of the object. The concept of multiple pose configurations allows the user to paint in areas that may not be directly accessible unless the model is deformed or decomposed into smaller pieces.
[0020] Embodiments of the present invention introduce unique techniques for organizing the multiple view/poses and for applying the resulting texture maps back onto the object. More specifically, the embodiments selectively control which surfaces receive paint using the surface orientation (normals) and depth maps rendered from the projecting views.
[0021] According to one aspect of the invention, a method for a computer system is described. One method includes posing at least a portion of a three-dimensional object model in a first configuration, determining a first two-dimensional view of at least the portion of the three-dimensional object model while in the first configuration, posing the portion of the three-dimensional object model in a second configuration, and determining a second two- dimensional view of the portion of the three-dimensional object model while in the second configuration. Various techniques also include associating a first two-dimensional image with the first two-dimensional view of at least the portion of the object model, and associating a second two-dimensional image with the second two-dimensional view of the portion of the object model. The process may also include associating a first set of surface parameters with a surface of at least the portion of the three-dimensional object model that is visible in the first two-dimensional view in response to the first two-dimensional image and in response to the first configuration for at least the portion of the three-dimensional object model, and associating a second set of surface parameters with a surface of the portion of the three-dimensional object model that is visible in the second two-dimensional view in response to the second two-dimensional image and in response to the second configuration for the portion of the three-dimensional object model.
[0022] According to another aspect of the invention, a computer program product for a computer system including a processor is described. The computer program product includes code that directs the processor to receive a first configuration for at least a portion of a three- dimensional object, code that directs the processor to determine a first two-dimensional image, wherein the first two-dimensional image exposes a surface of at least the portion of the three-dimensional object in the first configuration, code that directs the processor to receive a second configuration for at least the portion of the three-dimensional object, and code that directs the processor to determine a second two-dimensional image, wherein the second two-dimensional image exposes a surface of at least the portion of the three- dimensional object in the second configuration. Additional computer code may include code that directs the processor to receive a first two-dimensional paint image, wherein the first two-dimensional paint image is associated with the first two-dimensional image, and code that directs the processor to receive a second two-dimensional paint image, wherein the second two-dimensional paint image is associated with the second two-dimensional image. The code may also include code that directs the processor to determine a first group of parameters in response to the first two-dimensional paint image, wherein the first group of parameters is associated with the surface of at least the portion of the three-dimensional object in the first configuration, and code that directs the processor to determine a second group of parameters in response to the second two-dimensional paint image, wherein the second group of parameters is associated with the surface of at least the portion of the three- dimensional object in the second configuration. The codes may include machine readable or human readable code on a tangible media. Typical media includes a magnetic disk, an optical disk, or the like.
[0023] According to yet another aspect of the present invention, a computer system is described. The computer system typically includes a display, a memory, and a processor. In one computer system, the memory is configured to store a model of a three-dimensional object, a first pose and a second pose for the three-dimensional object, a first two- dimensional image and a second two dimensional-image, and surface shading parameters associated with a surface of the three-dimensional object. In the computer system, the processor is typically configured to output a first view of the three-dimensional object in the first pose to the display, configured to output a second view of the three-dimensional object in the second pose to the display, and configured to receive the first two-dimensional image and to receive the second two-dimensional image. The processor may also be configured to determine a first set of surface parameters associated with surfaces of the three-dimensional object in response to the first view of the three-dimensional object and in response to the first two-dimensional image, and configured to determine a second set of surface parameters associated with additional surfaces of the three-dimensional object in response to the second view of the three-dimensional object and in response to the second two-dimensional image.
BRIEF DESCRIPTION OF THE DRAWINGS
[0024] In order to more fully understand the present invention, reference is made to the accompanying drawings. Understanding that these drawings are not to be considered limitations in the scope of the invention, the presently described embodiments and the presently understood best mode of the invention are described with additional detail through use of the accompanying drawings in which:
[0025] Fig. 1 illustrates a block diagram of a system according to one embodiment of the present invention;
[0026] Fig. 2 illustrates a block diagram of an embodiment of the present invention;
[0027] Figs. 3A-B illustrates a flow process according to an embodiment of the present invention;
[0028] Fig. 4 illustrates an example of an embodiment;
[0029] Figs. 5A-C illustrate one example of an embodiment of the present invention;
[0030] Figs. 6A-D illustrate another example of an embodiment of the present invention; and
[0031] Figs. 7A-C illustrate another example of an embodiment of the present invention.
DETAILED DESCRIPTION OF THE INVENTION

[0032] In the following patent disclosure, the following terms are used:
[0033] Gprim (geometric primitive): a single three dimensional surface defined by a parametric function (Bspline), by a collection of three dimensional points organized in an arbitrary number of faces (polygons), or the like.

[0034] Model (Object Model): a collection of Gprims organized in an arbitrary number of faces (polygon meshes and subdivision surfaces), implicit surfaces or the like. The system does not require a 2D surface parameterization to perform its operations.
[0035] View: an orthographic or perspective camera that can generate an image of the model from a specific viewpoint.
[0036] Pose: the state of the model in terms of specific rigid transformations in its hierarchy and specific configuration of its Gprims. A pose also describes the state of one or more views.
[0037] A pose typically includes the position and orientation of both a model and all the view cameras. In embodiments of the present invention, a pose specifies a particular configuration or orientation of more than one object within an object model. For example, a pose may specify that two objects are a particular distance from each other, or that two objects are a particular angle with respect to each other, or the like. Examples of different poses of objects will be illustrated below.
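One way to picture the pose definition above is the following sketch under assumed data structures (nothing here is the patent's own format): rigid transforms for the model's parts together with the view cameras active in that pose.

```python
from dataclasses import dataclass, field

@dataclass
class View:
    name: str                # e.g. "front", "top", "bottom"
    camera_transform: tuple  # position / orientation, simplified here

@dataclass
class Pose:
    name: str
    part_transforms: dict = field(default_factory=dict)  # part -> transform
    views: list = field(default_factory=list)

exploded = Pose(
    name="table_exploded",
    part_transforms={"leg_1": ("translate", 0, -10, 0)},  # leg pulled away
    views=[View("bottom", ((0, -20, 0), (0, 1, 0)))],
)
print(exploded.name, list(exploded.part_transforms))
```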
[0038] Whenever the character is positioned, its position is typically saved as a named pose so it can be referenced later by the system and the user. A user saves a new pose after repositioning the model and establishing the new camera views. A painting (overlay image) that is created in a particular view is tied intimately to that camera's position and orientation.
[0039] Pass: type of painting, for example "color" or "displacement", along with the number of color channels that are to be used in the pass. The name provides a handle with which to reference a set of paintings within the shader. Typically the names are arbitrary.
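As an illustrative sketch only, a pass record might carry the arbitrary name the shader uses as a handle, the type of painting, and the channel count; this structure is an assumption, not the patent's own.

```python
from dataclasses import dataclass

@dataclass
class Pass:
    name: str        # arbitrary handle referenced by the shader
    kind: str        # e.g. "color" or "displacement"
    channels: int    # number of color channels used by the pass

passes = [Pass("base_color", "color", 3), Pass("cracks", "displacement", 1)]
by_name = {p.name: p for p in passes}
print(by_name["cracks"].channels)   # 1
```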
[0040] Fig. 1 is a block diagram of typical computer system 100 according to an embodiment of the present invention.
[0041] In the present embodiment, computer system 100 typically includes a monitor 110, computer 120, a keyboard 130, a user input device 140, a network interface 150, and the like.
[0042] In the present embodiment, user input device 140 is typically embodied as a computer mouse, a trackball, a track pad, wireless remote, drawing tablet, an integrated display and tablet (e.g. Cintiq by Wacom), voice command system, eye tracking system, and the like. User input device 140 typically allows a user to select objects, icons, text and the like that appear on the monitor 110.
[0043] Embodiments of network interface 150 typically include an Ethernet card, a modem (telephone, satellite, cable, ISDN), an (asynchronous) digital subscriber line (DSL) unit, and the like. Network interface 150 is typically coupled to a computer network as shown. In other embodiments, network interface 150 may be physically integrated on the motherboard of computer 120, may be a software program, such as soft DSL, or the like.
[0044] Computer 120 typically includes familiar computer components such as a processor 160, and memory storage devices, such as a random access memory (RAM) 170, disk drives 180, and system bus 190 interconnecting the above components.
[0045] In one embodiment, computer 120 is a PC compatible computer having one or more microprocessors such as Pentium IV™ or Xeon™ microprocessors from Intel Corporation. Further, in the present embodiment, computer 120 typically includes a LINUX-based operating system.
[0046] RAM 170 and disk drive 180 are examples of tangible media for storage of data, audio / video files, computer programs, scene descriptor files, object data files, overlay images, depth maps, shader descriptors, a rendering engine, a shading engine, output image files, texture maps, displacement maps, painting environment, object creation environments, animation environments, surface shading environment, asset management systems, databases and database management systems, and the like. Other types of tangible media include floppy disks, removable hard disks, optical storage media such as CD-ROMS, DVDs and bar codes, semiconductor memories such as flash memories, read-only-memories (ROMS), battery-backed volatile memories, networked storage devices, and the like.
[0047] In the present embodiment, computer system 100 may also include software that enables communications over a network such as the HTTP, TCP/IP, and RTP/RTSP protocols, and the like. In alternative embodiments of the present invention, other communications software and transfer protocols may also be used, for example IPX, UDP, or the like.
[0048] Fig. 1 is representative of computer rendering systems capable of embodying the present invention. It will be readily apparent to one of ordinary skill in the art that many other hardware and software configurations are suitable for use with the present invention. For example, the computer may be a desktop, portable, rack-mounted, or tablet configuration. Further, the use of other microprocessors is contemplated, such as Pentium™ or Itanium™ microprocessors; Opteron™ or Athlon-XP™ microprocessors from Advanced Micro Devices, Inc.; PowerPC G4™ and G5™ microprocessors from Motorola, Inc.; and the like. Further, other types of operating systems are contemplated, such as Windows® operating systems (e.g. Windows XP®, Windows NT®, or the like) from Microsoft Corporation, Solaris from Sun Microsystems, LINUX, UNIX, MAC OS from Apple Computer Corporation, and the like.
[0049] Fig. 2 illustrates a block diagram of an embodiment of the present invention. Specifically, Fig. 2 illustrates an animation environment 200, an object creation environment 210, and a storage system 220.
[0050] In the present embodiment, object creation environment 210 is an environment that allows users (modellers) to specify object articulation models, including armatures and rigs. Within this environment, users can create models (manually, procedurally, etc.) of objects and specify how the objects articulate with respect to animation variables (Avars). In one specific embodiment, object creation environment 210 is a Pixar proprietary object creation environment known as "Gepetto." In other embodiments, other types of object creation environments can be used.
[0051] In the present embodiment, object creation environment 210 may also be used by users (shaders) to specify surface parameters of the object models. As will be described below, an environment may be provided within object creation environment 210, or separately, that allows users to assign parameters to the surfaces of the object models via painting. In various embodiments, the surface parameters include color data, texture mapping data, displacement data, and the like. These surface parameters are typically used to render the object within a scene.
[0052] In embodiments of the present invention, the environment allows the user to define poses for the object model. Additionally, it allows the user to render views of the object model in the different poses. The environment also provides the mechanism to perform planar projections (with possible use of depth maps and surface normals) on "reference" poses, also known as Pref, while shading and rendering the standard object configuration, and it maintains associations among the different views, different poses, different paint data, different surface parameter data, and the like, as will be described below.
[0053] In the present embodiment, the object models that are created with object creation environment 210 may also be used in animation environment 200. Typically, object models are hierarchically built. The hierarchical nature of building up object models is useful because different users (modellers) are typically assigned the tasks of creating the different models. For example, one modeller is assigned the task of creating a hand model 290, a different modeller is assigned the task of creating a lower arm model 280, and the like.
[0054] In the present embodiment, animation environment 200 is an environment that allows users (animators) to manipulate object articulation models via the animation variables (Avars). In one embodiment, animation environment 200 is a Pixar proprietary animation environment known as "Menv," although in other embodiments, other animation environments could also be adapted. In this embodiment, animation environment 200 allows an animator to manipulate the Avars provided in the object models (generic rigs) and to move the objects with respect to time, i.e. animate an object.
[0055] In other embodiments of the present invention, animation environment 200 and object creation environment 210 may be combined into a single integrated environment.
[0056] In Fig. 2, storage system 220 may include any organized and repeatable way to access object articulation models. For example, in one embodiment, storage system 220 includes a simple flat-directory structure on local drive or network drive; in other embodiments, storage system 220 may be an asset management system or a database access system tied to a database, or the like. In one embodiment, storage system 220 receives references to object models from animation environment 200 and object creation environment 210. In return, storage system 220 provides the object model stored therein. Storage system 220 typically also stores the surface shading parameters, overlay images, depth maps, etc. discussed herein.
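A flat-directory variant of such a storage system can be sketched in a few lines; the naming scheme and serialization below are invented for illustration only:

    import os, pickle

    class FlatDirectoryStore:
        """Minimal asset store: one pickled file per named asset
        (object models, overlay images, depth maps, and so on)."""
        def __init__(self, root):
            self.root = root
            os.makedirs(root, exist_ok=True)

        def put(self, name, asset):
            with open(os.path.join(self.root, name + ".pkl"), "wb") as f:
                pickle.dump(asset, f)

        def get(self, name):   # resolve a reference to the stored asset
            with open(os.path.join(self.root, name + ".pkl"), "rb") as f:
                return pickle.load(f)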
[0057] Previously, Pixar's object creation environment allowed a user to paint and project images (textures) from multiple views onto an object model in a specific configuration (pose). However, the inventors of the present invention recognized that the object creation environment did not support views of objects in different poses and that it was difficult to apply textures to complicated three-dimensional models in a single pose.
[0058] Figs. 3A-B illustrate a flow process according to an embodiment of the present invention. Initially, a three-dimensional object model is provided, step 300. Typically, one or more users (object modelers) specify a geometric representation of one or more objects via an object creation environment. Together, these objects are combined to form a model of a larger object. In the present embodiment, the modeler may use an object creation environment such as Gepetto, or the like.
[0059] Next, in the present embodiment, additional users specify how the surface of the objects should appear. For example, such users (shaders) specify any number of surface effects of the object, such as base color, scratches, dirt, displacement maps, roughness and shininess maps, transparency, and control over material type. To do so, the user takes the three-dimensional object model and specifies an initial pose, step 310. In other embodiments, this step need not be specifically performed, as objects have default poses. For example, an object model for a character such as an automobile may have a default pose with its doors closed. At the same time, the user typically specifies one or more view camera positions. In other embodiments, a number of default cameras may be used for each object. For example, in various embodiments, commonly specified projection views include a top view, a left side view, a right side view, a bottom view, and the like. Additionally, the camera may be an oblique view, or the like. Projection views may be planar, but may also be non-planar and projective. As examples, perspective projections are contemplated, where curved projectors map an image onto a curved surface.
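The commonly specified default projection views named above can be pictured as one orthographic camera per canonical axis. The axis conventions and names in this sketch are an assumption made for illustration, not prescribed by the disclosure:

    import numpy as np

    def default_views():
        # +Y is taken as "up" here by convention; each entry is a unit
        # view direction for an orthographic camera.
        directions = {
            "front":  np.array([ 0.0,  0.0, -1.0]),
            "top":    np.array([ 0.0, -1.0,  0.0]),
            "bottom": np.array([ 0.0,  1.0,  0.0]),
            "left":   np.array([ 1.0,  0.0,  0.0]),
            "right":  np.array([-1.0,  0.0,  0.0]),
        }
        return {n: {"kind": "ortho", "direction": d}
                for n, d in directions.items()}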
[0060] Next, in the present embodiment, the computer system renders a two-dimensional view of the three-dimensional object in the first pose, step 320. More specifically, the system renders each view by using the object model, the pose, and view camera data. In various embodiments, the rendering pass may be a high-quality rendering via a rendering program such as Pixar's Renderman product. In other embodiments, the rendering / shading process may be performed with a low-quality rendering process, such as GL, and GPU hardware and software renderers.
[0061] In various embodiments, each rendered view may be stored as an individual view image file, or combined into a larger file. Along with each rendered view, a depth map is also generated, which the planar projection function, described below, utilizes.
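As a stand-in for the actual renderer (Renderman, GL, or a GPU renderer), the following toy z-buffer over surface sample points shows how a per-view depth map can be produced alongside the rendered image. The resolution, framing, and point-sampled surface representation are assumptions made for illustration:

    import numpy as np

    def render_depth(points, res=256, extent=1.0):
        """points: (N, 3) surface samples in camera space, with the camera
        at the origin looking down -Z. Returns a res x res depth map
        holding the nearest surface depth seen through each pixel."""
        depth = np.full((res, res), np.inf)
        px = ((points[:, 0] / extent) * 0.5 + 0.5) * (res - 1)
        py = ((points[:, 1] / extent) * 0.5 + 0.5) * (res - 1)
        z = -points[:, 2]                  # distance in front of the camera
        for x, y, d in zip(px.astype(int), py.astype(int), z):
            if 0 <= x < res and 0 <= y < res and 0 < d < depth[y, x]:
                depth[y, x] = d            # keep the closest surface
        return depth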
[0062] In the present embodiment, the system displays the one or more rendered views of the object, step 330. In embodiments of the present invention, this step occurs in a user environment that allows the user to graphically assign pixel values on a two dimensional image. Commonly, such environments are termed to include "paint" functionality. In one embodiment, the one or more views can be simultaneously displayed to the user.
[0063] Next, in embodiments, a user assigns pixel values to the views of the object, step 340. In one embodiment, the user performs this action by graphically painting "on top" of the views of the object. The painting is analogous to a child painting or coloring an image in a coloring book. For example, the user applies different brushes onto the views of the object, using an overlay layer or the like. The use of mechanisms analogous to "layers" is contemplated herein. In the present embodiments, the different brushes may have one or more gray scale values, one or more colors, and the like.
[0064] As an example, the user may use a fine black brush to draw a crack-type pattern in an overlay layer of the view. In another example, the user may use a spray paint-type brush to darken selected portions in an overlay layer of the view. In yet another example, the user may use a paint brush to color an overlay layer of the view. In still other embodiments, other ways to specify an overlay layer image are also contemplated, such as the application of one or more gradients to the image, the application of manipulations limited to specific portions of the overlay layer image (e.g. selections), the inclusion of one or more images into an overlay layer image (e.g. a decal layer), and the like.
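A minimal "spray paint"-style brush applied to an RGBA overlay layer might look as follows; the soft falloff and "over" compositing rule are illustrative choices rather than the disclosed brushes:

    import numpy as np

    def stamp(overlay, cx, cy, radius, rgba):
        """overlay: (H, W, 4) float image; paints one soft circular dab
        centered at pixel (cx, cy)."""
        h, w, _ = overlay.shape
        ys, xs = np.mgrid[0:h, 0:w]
        dist = np.sqrt((xs - cx) ** 2 + (ys - cy) ** 2)
        a = np.clip(1.0 - dist / radius, 0.0, 1.0) * rgba[3]  # soft edge
        for c in range(3):                 # "over" compositing per channel
            overlay[..., c] = a * rgba[c] + (1.0 - a) * overlay[..., c]
        overlay[..., 3] = np.maximum(overlay[..., 3], a)
        return overlay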
[0065] In the present embodiment, the overlay layer image for each view is then stored, step 350. In various examples, the overlay layer images are stored in files separate and identifiable from the two-dimensional view. In other embodiments, the overlay layer image is stored in a layer of the two-dimensional view, or the like. In various embodiments, the file including the overlay layer image is also associated with the pose defined in step 310 and the depth map determined in step 320, step 360.
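The associations of steps 350 and 360 amount to keeping the overlay, its pass, and the depth map under one (pose, view) key, so that a painting remains tied to the camera that produced it. A sketch with hypothetical names:

    paintings = {}   # keyed by (pose_name, view_name)

    def save_painting(pose_name, view_name, overlay, depth_map,
                      pass_name="color"):
        # The overlay is tied intimately to the view camera's position and
        # orientation, so it is stored with that pose/view and depth map.
        paintings[(pose_name, view_name)] = {
            "overlay": overlay,
            "depth": depth_map,
            "pass": pass_name,
        }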
[0066] In the present embodiment, the user may decide to re-pose the three-dimensional object in a second pose, step 370. The process described above is then repeated. In various embodiments, the process of re-posing the three-dimensional object, creating one or more views and depth maps, painting on top of the views, etc. can be repeated for as many poses as the user deems necessary. As an example, for a character object, one pose may be the character with an opened mouth and arms up, and another pose may be the character with a closed mouth and arms down. As another example, for a folding table, one pose may be the folding table unfolded, and another pose may be the folding table with its legs "exploded" or separated from the table top.
[0067] In some embodiments of the present invention, a user may see views derived from different poses of the three-dimensional object on the screen at the same time. Accordingly, the process of viewing and painting described above need not be performed only based upon one pose of the object at a time. Additionally, the user may paint on top of views of the object from different poses in the same session. For example, for a character posed with its mouth open, the user may paint white on a layer on a view showing the character's mouth, then the user may paint black on a layer of a view showing the character's hair, then the user may repaint a different shade of white on the layer on the view showing the character's mouth, and the like.
[0068] In the present embodiment, the next step is to associate values painted on each view of the three-dimensional object in the first pose back to the object, step 380. More specifically, each view of the object is typically a projection of surfaces of the three-dimensional object in the first pose into two dimensions. Accordingly, portions that appear to be "painted upon" by the overlay image are projected back to the three-dimensional object using the associated depth map. This functionality is enabled because the system maintains a linkage among the overlay image, the view, and the pose of the three-dimensional object. In cases where there are multiple rendered views, the paint is projected back for each rendered view to the three-dimensional object in the first pose.
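For the orthographic case, the projection of step 380 can be sketched directly: each painted pixel plus its depth-map value identifies the front-most surface point beneath it, and only points on that visible surface receive the paint. The pixel mapping matches the toy depth-map sketch above; all names are hypothetical:

    import numpy as np

    def project_back(overlay, depth, points, extent=1.0, tol=1e-2):
        """overlay: (H, W, 4); depth: (H, W); points: (N, 3) posed surface
        samples in camera space. Returns (N, 4) per-sample paint values;
        samples hidden from this view keep alpha 0."""
        res = depth.shape[0]
        paint = np.zeros((len(points), 4))
        px = (((points[:, 0] / extent) * 0.5 + 0.5) * (res - 1)).astype(int)
        py = (((points[:, 1] / extent) * 0.5 + 0.5) * (res - 1)).astype(int)
        z = -points[:, 2]
        for i, (x, y, d) in enumerate(zip(px, py, z)):
            if not (0 <= x < res and 0 <= y < res):
                continue
            # Only the front-most surface receives paint; occluded samples,
            # like the back of the cylinder in the Fig. 4 example below,
            # are skipped because they do not match the depth map.
            if abs(d - depth[y, x]) < tol and overlay[y, x, 3] > 0:
                paint[i] = overlay[y, x]
        return paint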
[0069] In embodiments of the present invention, surface normals may be used to "feather" the effect of the projection onto surfaces of the three-dimensional object. For example, for a surface parallel to the projection view, the paint effect may be calculated to be ~100%; whereas for a surface that is at a 30 degree angle to the projection view, the paint effect may be calculated to be ~50% (sin(30)); whereas for a surface that is at a 60 degree angle to the projection view, the paint effect may be calculated to be ~13% (sin(60)); and the like. The amount of feathering may be adjusted by the user. In other embodiments, feathering may be used to vary the paint effect at transition areas, such as the edges or borders of the object, and the like. In various embodiments, feathering of the projected paint reduces smearing of the projected paint.
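Reading the percentages of paragraph [0069] as weight ≈ 1 − sin(θ), where θ is the angle between the surface and the projection view, reproduces the stated values: 100% at 0 degrees, 50% at 30 degrees, and about 13% at 60 degrees. The sketch below expresses that falloff through the surface normal, with an exponent standing in for the user-adjustable feathering amount; this is one plausible reading, not necessarily the disclosed formula:

    import numpy as np

    def feather_weight(normal, view_dir, amount=1.0):
        """normal, view_dir: unit vectors, view_dir pointing from the
        surface toward the camera. A surface parallel to the projection
        view (normal aligned with view_dir) gets weight 1.0."""
        c = abs(float(np.dot(normal, view_dir)))  # cos(normal, view)
        s = np.sqrt(max(0.0, 1.0 - c * c))        # sin(surface, view plane)
        return max(0.0, 1.0 - s) ** amount

    # feather_weight([0, 0, 1], [0, 0, 1])  -> 1.000  (~100%, parallel)
    # at 30 degrees: c = cos(30)            -> 0.500  (~50%)
    # at 60 degrees: c = cos(60)            -> 0.134  (~13%)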
[0070] Fig. 4 illustrates an example of an embodiment. In this example, a three-dimensional cylinder 500 appears as a rectangle 510 in a two-dimensional view 520 of cylinder 500. According to the above process, a user paints an overlay image 530 on top of view 520. In this example, the user paints the bottom half of cylinder 500 black.
[0071] Next, as illustrated in Fig. 4, the overlay image is projected back to the three-dimensional cylinder. Accordingly, the model of the front bottom surface 540 of cylinder 500 is associated with the property or color black and feathered as the surface normal points away from the viewing plane. The back bottom surface 550 of cylinder 500 is not associated with the color black, as it was not exposed in view 520.
[0072] In the present example, a back view 560 and a bottom view 570 of cylinder 500 could be specified to expose remaining bottom half surfaces of cylinder 500.
[0073] Returning to Fig. 3, the next step is to associate values painted on each view of the three-dimensional object in the second pose back to the object, step 390. Similar to the above, each view of the object is typically a projection of surfaces of the three-dimensional object in the second pose into two dimensions. Accordingly, portions that appear to be "painted upon" by the overlay image are projected to the three-dimensional object using the associated depth map. Again, in cases where there are multiple rendered views, the paint is projected back for each rendered view to the three-dimensional object in the second pose.
[0074] In the present embodiment, the planar projections from step 380 and step 390 are combined and both projected back to the surface of the three-dimensional object, step 400. In other words, the users may paint upon the rendered views of the three-dimensional object that are in different poses, and have the paint data be projected back to a single three-dimensional object in a neutral pose.
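Combining the per-pose projections of steps 380 and 390 (step 400) reduces to a merge over the shared surface samples of the single object. The "strongest projection wins" policy below is invented for illustration, and the per-sample weights could come from the feathering sketch above:

    import numpy as np

    def combine_projections(projections):
        """projections: list of (paint, weight) pairs, one per pose/view,
        with paint of shape (N, 4) and weight of shape (N,).
        Returns the merged (N, 4) paint for the neutral-pose object."""
        n = len(projections[0][0])
        merged = np.zeros((n, 4))
        best = np.zeros(n)
        for paint, weight in projections:
            take = weight > best      # strongest projection per sample wins
            merged[take] = paint[take]
            best[take] = weight[take]
        return merged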
[0075] The inventors of the present invention believe that the above functionality is significant, as it allows the user to "paint" hard-to-reach portions of a three-dimensional object by allowing the user to repose the three-dimensional object and to paint upon the resulting rendered view. As an example, one pose may be a character with its mouth closed, and another with the mouth open. Further examples of the use of embodiments of the present invention will be illustrated below.
[0076] In embodiments of the present invention, this step 400 can be performed before a formal rendering of the three-dimensional object. In other embodiments, step 400 occurs dynamically during a formal rendering process. For example, the data from steps 380 and 390 may be maintained in separate files. Then, when the object is to be rendered in high quality (e.g. with Pixar's Renderman), the system dynamically combines the planar projection data from the three-dimensional object in the first pose with the planar projection data from the three-dimensional object in the second pose.
[0077] The combined planar projection data is then used to render the three-dimensional object in typically a third pose, step 410. As an example, the first pose may be a character with both arms down, the second pose may be the character with both arms up, and the third pose may be the character with only one arm up.
[0078] In embodiments of the present invention, the paint data may specify any number of properties for the surface of the object. Such properties are also termed shading pass data. For a typical object surface, there may be more than one hundred shading passes. For example, the paint data may specify surface colors, application of texture maps, application of displacement maps, and the like. In embodiments of the present invention, the planar projections from steps 380 and 390 may apply the same properties or different properties to the surface of the object. For example, step 380 may be a surface "crack" pass, and step 390 may be a surface color pass.
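Because the projections from different poses may feed different named passes (a "crack" pass from step 380, a "color" pass from step 390), the projected data is naturally stored per pass name. A minimal sketch, with hypothetical names:

    shading_passes = {}   # pass name -> list of (N, channels) arrays

    def add_pass_data(name, channels, data):
        # The pass name is the arbitrary handle the shader later uses to
        # look up the corresponding set of paintings.
        assert data.shape[1] == channels
        shading_passes.setdefault(name, []).append(data)

    # e.g. add_pass_data("crack", 1, crack_data)   # from step 380
    #      add_pass_data("color", 3, color_data)   # from step 390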
[0079] In various embodiments, the object is rendered at the same time as other objects in a scene. The rendered scene is typically another two-dimensional image that is then stored, step 420. In embodiments of the present invention, the rendered scene can be stored in optical form such as film, optical disk (e.g. CD-ROM, DVD), or the like; in magnetic form such as a hard disk, network drive, or the like; or in electronic form such as an electronic signal, a data packet, or the like. The representation of the resulting rendered scene may later be retrieved and displayed to one or more viewers, step 430.
[0080] Figs. 5A-D illustrate one example of an embodiment of the present invention. Specifically, Fig. 5A illustrates a three-dimensional model of a box 600 in a closed pose. In Fig. 5B, a number of two-dimensional views of box 600 are illustrated, including a front view 610, a top view 620, and a side view 630.
[0081] In Fig. 5C, a user creates overlay images 640-660 on top of views 610-630, respectively. As discussed above, the user typically paints on top of view 610-630 to create overlay images 640-660. Fig. 5D illustrates a three-dimensional model of box 670 in the closed pose after overlay images 640-660 are projected back to the three-dimensional model of box 600 in the closed pose.
[0082] Figs. 6A-E illustrate another example of an embodiment of the present invention. Specifically, Fig. 6A illustrates a three-dimensional model of a box 700 in an open pose. In Fig. 6B, two-dimensional views of box 700 are illustrated, including a top view 710, a first cross-section 720, and a second cross-section 730.
[0083] In Fig. 6C, a user creates overlay images 740-760 on top of views 710-730, respectively. Again, the user typically paints on top of the respective views to create the overlay images. Fig. 6D illustrates a three-dimensional model of box 770 in the open pose after overlay images 740-760 are projected back to the three-dimensional model of box 700 in the open pose.
[0084] In the present embodiment, the three-dimensional models of box 670 and of box 770 are then combined into a single three-dimensional model. Illustrated in Fig. 6E is a single three-dimensional model of a box 780 including the projected-back data from Figs. 5C and 6C. As shown in Fig. 6E, the three-dimensional model may be posed in a pose different from the pose in Fig. 5A or Fig. 6A.
[0085] Figs. 7A-C illustrate another example of an embodiment of the present invention. More specifically, Fig. 7A illustrates a three-dimensional model of a stool in a default pose 800. In Fig. 7B, a number of views 810 of stool 800 in the default pose are illustrated. In this example, the user can paint upon views 810, as described above. Fig. 7C then illustrates the three-dimensional model of the stool in a second pose 820. As can be seen, the legs 830 of the stool are "exploded" or separated from the sitting surface. A number of views 840 are illustrated. In this example, it can be seen that with views 840, the user can more easily paint the bottom of the sitting surface 850 and the legs 860 of the stool.
[0086] Many changes or modifications are readily envisioned. In light of the above disclosure, one of ordinary skill in the art would recognize that the concepts described above may be applied to any number of environments. For example, the painting functions may be provided integral to an object creation environment, a separate shading environment, a third-party paint program (e.g. Photoshop, Maya, Softimage), and the like. Some embodiments described above use planar projection techniques to form views of an object and to project the overlay layer back to the three-dimensional object. Other embodiments may also use non-planar projection techniques to form perspective views of an object and to project back to the three-dimensional object.
[0087] In other embodiments of the present invention, the process of painting in an overlay layer and performing a planar projection back to the three-dimensional object may be done in real time or near-real time for multiple poses of the object. For example, the user may be presented with a first view of the object in a first pose and a second view of the object in a second pose. Next, the user paints in an overlay layer of the first view. In this embodiment, as the user paints, a planar projection process occurs that projects the paint back to the three-dimensional object. Then, in real time or near-real time, the system re-renders the first view of the object in the first pose and also the second view of the object in the second pose. In such embodiments, because the process occurs very quickly, the user can see the effect of specifying surface parameters on the object in one pose reflected in all other poses (views of other poses).
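The near-real-time behaviour described here is essentially a small update loop: paint, project back, re-render every open view. In the schematic below the three stage functions are injected as parameters because they are placeholders for the sketches above, not part of the disclosure:

    def on_brush_stroke(model, poses, active_view, stroke,
                        apply_stroke, project_paint, rerender):
        """One iteration of the interactive loop."""
        apply_stroke(active_view["overlay"], stroke)
        # Project the fresh paint back through the active view onto the
        # three-dimensional object...
        project_paint(model, active_view)
        # ...then re-render all views of all poses, so the user at once
        # sees the new surface parameters from every pose.
        for pose in poses:
            for view in pose["views"]:
                view["image"] = rerender(model, pose, view)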
[0088] In embodiments of the present invention, various methods for painting upon the surface are contemplated, such as with brushes, textures, gradients, filters, and the like. Further, various methods for storing the painted images (e.g. layers) are also contemplated.
[0089] The above embodiments disclose a method for a computer system and a computer system capable of performing the disclosed methods. Additional embodiments include computer program products on tangible media including software code that allows the computer system to perform the disclosed methods, and the like.
[0090] Further embodiments can be envisioned to one of ordinary skill in the art after reading this disclosure. In other embodiments, combinations or sub-combinations of the above disclosed invention can be advantageously made. The block diagrams of the architecture and flow charts are grouped for ease of understanding. However it should be understood that combinations of blocks, additions of new blocks, re-arrangement of blocks, and the like are contemplated in alternative embodiments of the present invention.
[0091] The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. It will, however, be evident that various modifications and changes may be made thereunto without departing from the broader spirit and scope of the invention as set forth in the claims.

Claims

WHAT IS CLAIMED IS:

1. A method for a computer system includes: posing at least a portion of a three-dimensional object model in a first configuration; determining a first two-dimensional view of at least the portion of the three-dimensional object model while in the first configuration; posing the portion of the three-dimensional object model in a second configuration; determining a second two-dimensional view of the portion of the three-dimensional object model while in the second configuration; associating a first two-dimensional image with the first two-dimensional view of at least the portion of the object model; associating a second two-dimensional image with the second two-dimensional view of the portion of the object model; associating a first set of surface parameters with a surface of at least the portion of the three-dimensional object model that is visible in the first two-dimensional view in response to the first two-dimensional image and in response to the first configuration for at least the portion of the three-dimensional object model; and associating a second set of surface parameters with a surface of the portion of the three-dimensional object model that is visible in the second two-dimensional view in response to the second two-dimensional image and in response to the second configuration for the portion of the three-dimensional object model.

2. The method of claim 1 wherein the first two-dimensional view of the portion of the three-dimensional object model is selected from the group: front view, side view, top view, bottom view.

3. The method of any of claims 1-2 wherein the portion of the three-dimensional object model includes a first object and a second object; wherein the first configuration for at least the portion of the three-dimensional object model comprises the first object and the second object having a first relationship; wherein the second configuration for the portion of the three-dimensional object model comprises the first object and the second object having a second relationship; and wherein the first relationship and the second relationship are different.

4. The method of claim 3 wherein the first relationship and the second relationship are selected from the group: linear relationship, angular relationship.

5. The method of any of claims 1-4 further comprising: displaying the first two-dimensional view of at least the portion of the object model on a display; and creating the first two-dimensional image by painting on top of the first two-dimensional view of at least the portion of the object model on the display.

6. The method of any of claims 1-5 wherein the first set of surface parameters is selected from the group including: surface color, surface appearance, displacement maps, texture maps.

7. The method of any of claims 1-6 further comprising: rendering the portion of the three-dimensional object model in response to the first set of surface parameters and the second set of surface parameters to form a rendered object; and storing a representation of the rendered object in a tangible media.

8. The tangible media including the representation of the rendered object formed according to the method described in any of claims 1-7.

9. A computer program product for a computer system including a processor comprises: code that directs the processor to receive a first configuration for at least a portion of a three-dimensional object; code that directs the processor to determine a first two-dimensional image, wherein the first two-dimensional image exposes a surface of at least the portion of the three-dimensional object in the first configuration; code that directs the processor to receive a second configuration for at least the portion of the three-dimensional object; code that directs the processor to determine a second two-dimensional image, wherein the second two-dimensional image exposes a surface of at least the portion of the three-dimensional object in the second configuration; code that directs the processor to receive a first two-dimensional paint image, wherein the first two-dimensional paint image is associated with the first two-dimensional image; code that directs the processor to receive a second two-dimensional paint image, wherein the second two-dimensional paint image is associated with the second two-dimensional image; code that directs the processor to determine a first group of parameters in response to the first two-dimensional paint image, wherein the first group of parameters is associated with the surface of at least the portion of the three-dimensional object in the first configuration; and code that directs the processor to determine a second group of parameters in response to the second two-dimensional paint image, wherein the second group of parameters is associated with the surface of at least the portion of the three-dimensional object in the second configuration; wherein the codes reside on a tangible media.

10. The computer program product of claim 9 wherein the first two-dimensional image comprises an image of at least the portion of the three-dimensional object in the first configuration in a view selected from the group: front view, side view, top view, bottom view, oblique view.

11. The computer program product of any of claims 9-10 further comprising: code that directs the processor to combine at least a portion of the first group of parameters and at least a portion of the second group of parameters.

12. The computer program product of any of claims 9-11 further comprising: code that directs the processor to determine a third two-dimensional image, wherein the third two-dimensional image exposes additional surface of at least the portion of the three-dimensional object in the first configuration; code that directs the processor to receive a third two-dimensional paint image, wherein the third two-dimensional paint image is associated with the third two-dimensional image; and code that directs the processor to determine a third group of parameters in response to the third two-dimensional paint image, wherein the third group of parameters is associated with the additional surface of at least the portion of the three-dimensional object in the first configuration; wherein the surface of at least the portion of the three-dimensional object in the first configuration and the additional surface of at least the portion of the three-dimensional object in the first configuration overlap.

13. The computer program product of any of claims 9-12 wherein the portion of the three-dimensional object includes a first object and a second object; wherein the first configuration comprises the first object and the second object being oriented at a first angle; wherein the second configuration comprises the first object and the second object being oriented at a second angle; and wherein the first angle and the second angle are different.

14. The computer program product of any of claims 9-13 wherein the first set of parameters and the second set of parameters are selected without replacement from the group including: surface color, surface appearance, displacement map, texture map.

15. The computer program product of any of claims 9-14 further comprising: code that directs the processor to determine a fourth two-dimensional image, wherein the fourth two-dimensional image exposes additional surface of at least the portion of the three-dimensional object in the second configuration; code that directs the processor to receive a fourth two-dimensional paint image, wherein the fourth two-dimensional paint image is associated with the fourth two-dimensional image; code that directs the processor to determine a fourth group of parameters in response to the fourth two-dimensional paint image, wherein the fourth group of parameters is associated with the additional surface of at least the portion of the three-dimensional object in the second configuration; and code that directs the processor to combine the first group of parameters and the fourth group of parameters.

16. A computer system comprises: a display; a memory configured to store a model of a three-dimensional object, wherein the memory is configured to store a first pose and a second pose for the three-dimensional object, wherein the memory is also configured to store a first two-dimensional image and a second two-dimensional image, and wherein the memory is configured to store surface shading parameters associated with a surface of the three-dimensional object; and a processor coupled to the memory and to the display, wherein the processor is configured to output a first view of the three-dimensional object in the first pose to the display, wherein the processor is configured to output a second view of the three-dimensional object in the second pose to the display, wherein the processor is configured to receive the first two-dimensional image and to receive the second two-dimensional image, wherein the processor is configured to determine a first set of surface parameters associated with surfaces of the three-dimensional object in response to the first view of the three-dimensional object and in response to the first two-dimensional image, and wherein the processor is configured to determine a second set of surface parameters associated with additional surfaces of the three-dimensional object in response to the second view of the three-dimensional object and in response to the second two-dimensional image.

17. The computer system of claim 16 wherein the first view of the three-dimensional object is selected from the group: front view, side view, top view, bottom view, oblique view.

18. The computer system of any of claims 16-17 wherein the three-dimensional object includes a first object and a second object; wherein the first pose comprises the first object positioned in a first orientation relative to the second object; wherein the second pose comprises the first object positioned in a second orientation relative to the second object; and wherein the first orientation and the second orientation are selected from the group: distance, angle.

19. The computer system of any of claims 16-18 wherein the additional surfaces of the three-dimensional object include other surfaces of the three-dimensional object not within the surfaces of the three-dimensional object.

20. The computer system of any of claims 16-19 wherein the first set of surface parameters is selected from the group including: surface color, surface appearance, displacement map, texture map.

21. The computer system of any of claims 16-20 wherein the processor is also configured to shade the three-dimensional object in response to the first set of surface parameters and the second set of surface parameters to form a shaded representation of the three-dimensional object; and wherein the memory is also configured to store the shaded representation of the three-dimensional object.

22. The computer system of any of claims 16-21 wherein the first view of the three-dimensional object in the first pose comprises a front view; and wherein the second view of the three-dimensional object in the second pose also comprises a front view.
EP04801835A 2003-07-29 2004-03-23 Improved paint projection method and apparatus Withdrawn EP1661116A4 (en)

Applications Claiming Priority (2)

- US49116003P: priority date 2003-07-29, filed 2003-07-29
- PCT/US2004/008993 (published as WO2005017871A1): filed 2004-03-23, "Improved paint projection method and apparatus"

Publications (2)

- EP1661116A1
- EP1661116A4: published 2010-12-01

Family ID: 34193093

Also Published As

- WO2005017871A1: published 2005-02-24
- CA2533451A1: published 2005-02-24
- CN1833271A: published 2006-09-13
- JP2007500395A: published 2007-01-11
- NZ544937A: published 2009-03-31
- CN1833271B: published 2010-05-05
- EP1661116A4: published 2010-12-01


Legal Events

- PUAI: public reference made under Article 153(3) EPC to a published international application that has entered the European phase (original code 0009012)
- 17P: request for examination filed, effective 2006-02-09
- AK: designated contracting states (ref document kind code A1): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PL PT RO SE SI SK TR
- DAX: request for extension of the European patent (deleted)
- A4: supplementary search report drawn up and despatched, effective 2010-11-04
- RIC1: IPC codes assigned before grant: G09G 5/00 (AFI), G06T 11/00 (ALI), G06T 13/00 (ALI), G06T 17/40 (ALI)
- 17Q: first examination report despatched, effective 2011-04-05
- STAA: status: the application is deemed to be withdrawn
- 18D: application deemed to be withdrawn, effective 2014-04-30