EP1661116A1 - Improved paint projection method and apparatus - Google Patents

Improved paint projection method and apparatus

Info

Publication number
EP1661116A1
Authority
EP
European Patent Office
Prior art keywords
dimensional
view
image
dimensional object
processor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP04801835A
Other languages
German (de)
English (en)
Other versions
EP1661116A4 (fr)
Inventor
Thomas Hahn
Rick Sayre
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Pixar
Original Assignee
Pixar
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Pixar filed Critical Pixar
Publication of EP1661116A1 (fr)
Publication of EP1661116A4 (fr)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/001 Texturing; Colouring; Generation of texture or colour
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/016 Exploded view
    • G06T2219/20 Indexing scheme for editing of 3D models
    • G06T2219/2012 Colour editing, changing, or manipulating; Use of colour codes
    • G06T2219/2016 Rotation, translation, scaling

Definitions

  • the present invention relates to computer animation. More specifically, the present invention relates to enhanced methods and apparatus for specifying surface properties of animation objects.
  • Stop-motion-based animation techniques typically required the construction of miniature sets, props, and characters. The filmmakers would construct the sets, add props, and position the miniature characters in a pose. After the animator was happy with how everything was arranged, one or more frames of film would be taken of that specific arrangement. Stop-motion animation techniques were developed by movie makers such as Willis O'Brien for movies such as "King Kong" (1933). Subsequently, these techniques were refined by animators such as Ray Harryhausen for movies including "Mighty Joe Young" (1949) and "Clash of the Titans" (1981).
  • One of the pioneering companies in the computer-aided animation (CAA) industry was Pixar, d.b.a. Pixar Animation Studios. Over the years, Pixar developed and offered both computing platforms specially designed for CAA and Academy Award®-winning rendering software known as RenderMan®.
  • Pixar has also developed software products and software environments for internal use allowing users (modelers) to easily define object rigs and allowing users (animators) to easily animate the object rigs. Based upon such real-world experience, the inventors of the present invention have determined that additional features could be provided to such products and environments to facilitate the object definition and animation process.
  • One such feature includes methods and apparatus for facilitating the definition of surface properties of objects.
  • the present invention relates to computer animation. More specifically, the present invention relates to methods and apparatus allowing a user to specify surface parameters of an object or portions of an object that are in different poses.
  • Embodiments of the present invention are used to help manage the process of creating three-dimensional "paintings."
  • Embodiments control the definition of multiple poses, manage the rendering of views, provide the mechanism for transmitting texture information to the surface materials, provide cataloging and source control for textures and other data files, and the like.
  • a user can effectively paint "directly” onto a three dimensional object using any conventional two dimensional painting program.
  • the painting program relies on "layers.”
  • the user paints a number of two-dimensional paintings (e.g. overlay images) on different views of the object. Typical views are cameras with orientations such as "front," "back," and the like.
  • the user can re-pose the object model, in multiple configurations, if the model is too hard to paint fully in a single reference pose. Additionally, the user can paint a number of overlay images of views of the reposed object.
  • a typical workflow of embodiments of the present invention includes: loading an object model into the system and posing the object model in different configurations. For example, to paint a table, the user may have one pose that defines the table and another pose that "explodes" the model by translating the table legs away from the bottom of the table. Next, the workflow may include creating or defining one or more views to paint on the model in the different poses.
  • a rendering pass is performed on the object in the different poses and in the defined views.
  • the results of the rendering pass are typically bitmap images (views) of the rendered object and associated depth maps of the rendered surfaces.
  • the workflow may also include the user loading the rendered bitmaps into a two dimensional paint program and painting one or more passes representing color, displacement, or the like.
  • the system computes at render time the result of a planar projection (reverse map) of each object in each pose to each view and stores the resulting 2D coordinates of every visible surface point.
  • the surface shader will use these stored 2D coordinates for evaluating surface parameters, such as 2D texture maps, for each pass.
  • the values returned by the pass computation are then used to produce different effects in the shader, such as coloring or displacing the surfaces affected by paint.
  • non-planar projections such as perspective projections are used.
  • the depth map is evaluated during the planar projection phase to ensure that only the foremost surface relative to the projected view receives the paint. Additionally, the surface normals are taken into consideration during the projection process to avoid projecting paint onto surfaces that are perpendicular to or facing away from the projection view, as in the sketch below.
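The following is a minimal sketch of those two tests, not the patent's implementation. It assumes an orthographic view looking along +z, view-space x and y in [-1, 1], a depth map holding the nearest surface depth per pixel, and NumPy arrays; all names are illustrative.

```python
import numpy as np

def project_paint(points, normals, depth_map, overlay, eps=1e-3):
    """Project an overlay painting onto the visible surface points of a
    posed object.  `points` is an (N, 3) array of view-space positions,
    `normals` an (N, 3) array of unit surface normals, and `depth_map` /
    `overlay` are (H, W) images rendered from the same view.  Returns an
    (N,) array of paint values; NaN marks points that received no paint."""
    h, w = depth_map.shape
    paint = np.full(len(points), np.nan)
    for i, (p, n) in enumerate(zip(points, normals)):
        # Reverse map: view-space (x, y) in [-1, 1] -> pixel coordinates.
        px = int((p[0] * 0.5 + 0.5) * (w - 1))
        py = int((p[1] * 0.5 + 0.5) * (h - 1))
        if not (0 <= px < w and 0 <= py < h):
            continue
        # Depth test: only the foremost surface relative to the view
        # (the one recorded in the depth map) receives the paint.
        if p[2] > depth_map[py, px] + eps:
            continue
        # Normal test: skip surfaces perpendicular to, or facing away
        # from, the projection view (the view direction is +z here).
        if -n[2] <= 0.0:
            continue
        paint[i] = overlay[py, px]
    return paint
```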
  • the surface shader finishes its computation.
  • the resulting rendered object model is typically posed in a different pose from the poses described above.
  • the rendered object model is typically rendered in context of a scene, and the rendered scene is stored in memory. At a later time, the rendered scene is typically retrieved from memory and displayed to a user. In various embodiments, the memory may be a hard disk drive, RAM, DVD-ROM, CD-ROM, film media, print media, and the like.
  • embodiments of the present invention allow users to pose articulated three-dimensional object models in multiple configurations for receiving projected paint from multiple views. These embodiments increase the efficiency and effectiveness of applying surface parameters, such as multiple texture maps, colors, and the like, onto surfaces of complex deformable three dimensional object models.
  • Advantages of embodiments of the present invention include the capability to allow the user to paint any three dimensional object model from multiple viewpoints and multiple pose configurations of the object.
  • the concept of multiple pose configurations allows the user to paint in areas that may not be directly accessible unless the model is deformed or decomposed into smaller pieces.
  • Embodiments of the present invention introduce unique techniques for organizing the multiple view/poses and for applying the resulting texture maps back onto the object. More specifically, the embodiments selectively control which surfaces receive paint using the surface orientation (normals) and depth maps rendered from the projecting views.
  • a method for a computer system includes posing at least a portion of a three-dimensional object model in a first configuration, determining a first two-dimensional view of at least the portion of the three-dimensional object model while in the first configuration, posing the portion of the three-dimensional object model in a second configuration, and determining a second two-dimensional view of the portion of the three-dimensional object model while in the second configuration.
  • Various techniques also include associating a first two-dimensional image with the first two-dimensional view of at least the portion of the object model, and associating a second two-dimensional image with the second two-dimensional view of the portion of the object model.
  • the process may also include associating a first set of surface parameters with a surface of at least the portion of the three-dimensional object model that is visible in the first two-dimensional view in response to the first two-dimensional image and in response to the first configuration for at least the portion of the three-dimensional object model, and associating a second set of surface parameters with a surface of the portion of the three-dimensional object model that is visible in the second two-dimensional view in response to the second two-dimensional image and in response to the second configuration for the portion of the three-dimensional object model.
  • a computer program product for a computer system including a processor includes code that directs the processor to receive a first configuration for at least a portion of a three-dimensional object, code that directs the processor to determine a first two-dimensional image, wherein the first two-dimensional image exposes a surface of at least the portion of the three-dimensional object in the first configuration, code that directs the processor to receive a second configuration for at least the portion of the three-dimensional object, and code that directs the processor to determine a second two-dimensional image, wherein the second two-dimensional image exposes a surface of at least the portion of the three-dimensional object in the second configuration.
  • Additional computer code may include code that directs the processor to receive a first two-dimensional paint image, wherein the first two-dimensional paint image is associated with the first two-dimensional image, and code that directs the processor to receive a second two-dimensional paint image, wherein the second two-dimensional paint image is associated with the second two-dimensional image.
  • the code may also include code that directs the processor to determine a first group of parameters in response to the first two-dimensional paint image, wherein the first group of parameters is associated with the surface of at least the portion of the three-dimensional object in the first configuration, and code that directs the processor to determine a second group of parameters in response to the second two-dimensional paint image, wherein the second group of parameters is associated with the surface of at least the portion of the three-dimensional object in the second configuration.
  • the codes may include machine readable or human readable code on a tangible media. Typical media includes a magnetic disk, an optical disk, or the like.
  • a computer system typically includes a display, a memory, and a processor.
  • the memory is configured to store a model of a three-dimensional object, a first pose and a second pose for the three-dimensional object, a first two-dimensional image and a second two-dimensional image, and surface shading parameters associated with a surface of the three-dimensional object.
  • the processor is typically configured to output a first view of the three-dimensional object in the first pose to the display, configured to output a second view of the three-dimensional object in the second pose to the display, and configured to receive the first two-dimensional image and to receive the second two-dimensional image.
  • the processor may also be configured to determine a first set of surface parameters associated with surfaces of the three-dimensional object in response to the first view of the three-dimensional object and in response to the first two-dimensional image, and configured to determine a second set of surface parameters associated with additional surfaces of the three-dimensional object in response to the second view of the three-dimensional object and in response to the second two-dimensional image.
  • FIG. 1 illustrates a block diagram of a system according to one embodiment of the present invention
  • FIG. 2 illustrates a block diagram of an embodiment of the present invention
  • FIGs. 3A-B illustrate a flow process according to an embodiment of the present invention
  • FIG. 4 illustrates an example of an embodiment
  • FIGs. 5A-C illustrate one example of an embodiment of the present invention
  • FIGs. 6A-D illustrate another example of an embodiment of the present invention.
  • FIGs. 7A-C illustrate another example of an embodiment of the present invention.
  • Gprim: geometric primitive
  • Bspline: parametric function
  • Model (Object Model): a collection of Gprims organized in an arbitrary number of faces (polygon meshes and subdivision surfaces), implicit surfaces, or the like. The system does not require a 2D surface parameterization to perform its operations.
  • View: an orthographic or perspective camera that can generate an image of the model from a specific viewpoint.
  • Pose: the state of the model in terms of specific rigid transformations in its hierarchy and a specific configuration of its Gprims. A pose also describes the state of one or more views.
  • a pose typically includes the position and orientation of both a model and all the view cameras.
  • a pose specifies a particular configuration or orientation of more than one object within an object model.
  • a pose may specify that two objects are a particular distance from each other, or that two objects are at a particular angle with respect to each other, or the like. Examples of different poses of objects will be illustrated below.
  • Pass: type of painting, for example "color" or "displacement," along with the number of color channels that are to be used in the pass.
  • the name provides a handle with which to reference a set of paintings within the shader. Typically, the names are arbitrary. A sketch of these structures follows.
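The glossary terms above map naturally onto simple data structures. Below is a hedged sketch of one possible representation; the field names and types are assumptions, not taken from the patent.

```python
from dataclasses import dataclass, field
from typing import Dict, List

Matrix4 = List[List[float]]  # 4x4 transform, row-major

@dataclass
class View:
    """An orthographic or perspective camera imaging the model."""
    name: str                   # e.g. "front", "top", "left"
    camera_to_world: Matrix4
    perspective: bool = False   # False -> orthographic

@dataclass
class Pose:
    """The state of the model's rigid transforms and Gprim configuration,
    together with the state of one or more views."""
    name: str
    gprim_transforms: Dict[str, Matrix4]
    views: List[View] = field(default_factory=list)

@dataclass
class Pass:
    """A type of painting ("color", "displacement", ...); the name is an
    arbitrary handle used to reference its paintings within the shader."""
    name: str
    kind: str
    channels: int = 3           # number of color channels used by the pass
```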
  • FIG. 1 is a block diagram of typical computer system 100 according to an embodiment of the present invention.
  • computer system 100 typically includes a monitor 110, computer 120, a keyboard 130, a user input device 140, a network interface 150, and the like.
  • user input device 140 is typically embodied as a computer mouse, a trackball, a track pad, wireless remote, drawing tablet, an integrated display and tablet (e.g. Cintiq by Wacom), voice command system, eye tracking system, and the like.
  • User input device 140 typically allows a user to select objects, icons, text and the like that appear on the monitor 110.
  • Embodiments of network interface 150 typically include an Ethernet card, a modem (telephone, satellite, cable, ISDN), (asynchronous) digital subscriber line (DSL) unit, and the like.
  • Network interface 150 is typically coupled to a computer network as shown. In other embodiments, network interface 150 may be physically integrated on the motherboard of computer 120, may be a software program, such as soft DSL, or the like.
  • Computer 120 typically includes familiar computer components such as a processor 160, and memory storage devices, such as a random access memory (RAM) 170, disk drives 180, and system bus 190 interconnecting the above components.
  • computer 120 is a PC-compatible computer having one or more microprocessors such as Pentium IV™ or Xeon™ microprocessors from Intel Corporation. Further, in the present embodiment, computer 120 typically includes a LINUX-based operating system.
  • RAM 170 and disk drive 180 are examples of tangible media for storage of data, audio / video files, computer programs, scene descriptor files, object data files, overlay images, depth maps, shader descriptors, a rendering engine, a shading engine, output image files, texture maps, displacement maps, painting environment, object creation environments, animation environments, surface shading environment, asset management systems, databases and database management systems, and the like.
  • Other types of tangible media include floppy disks, removable hard disks, optical storage media such as CD-ROMS, DVDs and bar codes, semiconductor memories such as flash memories, read-only-memories (ROMS), battery-backed volatile memories, networked storage devices, and the like.
  • computer system 100 may also include software that enables communications over a network, such as the HTTP, TCP/IP, and RTP/RTSP protocols, and the like. In alternative embodiments of the present invention, other communications software and transfer protocols may also be used, for example IPX, UDP, or the like.
  • Fig. 1 is representative of computer rendering systems capable of embodying the present invention. It will be readily apparent to one of ordinary skill in the art that many other hardware and software configurations are suitable for use with the present invention.
  • the computer may be a desktop, portable, rack-mounted or tablet configuration.
  • other microprocessors are contemplated, such as Pentium™ or Itanium™ microprocessors; Opteron™ or Athlon-XP™ microprocessors from Advanced Micro Devices, Inc.; PowerPC G4™ and G5™ microprocessors from Motorola, Inc.; and the like.
  • other operating systems are also contemplated, such as a Windows® operating system (e.g. Windows XP®, Windows NT®, or the like) from Microsoft Corporation, Solaris from Sun Microsystems, LINUX, UNIX, Mac OS from Apple Computer Corporation, and the like.
  • Fig. 2 illustrates a block diagram of an embodiment of the present invention. Specifically, Fig. 2 illustrates an animation environment 200, an object creation environment 210, and a storage system 220.
  • object creation environment 210 is an environment that allows users (modelers) to specify object articulation models, including armatures and rigs. Within this environment, users can create models (manually, procedurally, etc.) of objects and specify how the objects articulate with respect to animation variables (Avars). In one specific embodiment, object creation environment 210 is a Pixar proprietary object creation environment known as "Gepetto." In other embodiments, other types of object creation environments can be used.
  • object creation environment 210 may also be used by users (shaders) to specify surface parameters of the object models.
  • an environment may be provided within object creation environment 210, or separately, that allows users to assign parameters to the surfaces of the object models via painting.
  • the surface parameters include color data, texture mapping data, displacement data, and the like. These surface parameters are typically used to render the object within a scene.
  • the environment allows the user to define poses for the object model. Additionally, it allows the user to render views of the object model in the different poses.
  • the environment also provides the mechanism to perform planar projections (with possible use of depth maps and surface normals) on "reference" poses, also known as Pref, while shading and rendering the standard object configuration, and it maintains associations among the different views, different poses, different paint data, different surface parameter data, and the like, as will be described below.
  • the object models that are created with object creation environment 210 may also be used in animation environment 200.
  • object models are hierarchically built. The hierarchical nature of building up object models is useful because different users (modelers) are typically assigned the tasks of creating the different models. For example, one modeler is assigned the task of creating a hand model 290, a different modeler is assigned the task of creating a lower arm model 280, and the like.
  • animation environment 200 is an environment that allows users (animators) to manipulate object articulation models, via the animation variables (Avars).
  • animation environment 200 is a Pixar proprietary animation environment known as "MenV," although in other embodiments, other animation environments could also be adapted.
  • animation environment 200 allows an animator to manipulate the Avars provided in the object models (generic rigs) and to move the objects with respect to time, i.e. animate an object.
  • animation environment 200 and object creation environment 210 may be combined into a single integrated environment.
  • storage system 220 may include any organized and repeatable way to access object articulation models.
  • storage system 220 includes a simple flat-directory structure on local drive or network drive; in other embodiments, storage system 220 may be an asset management system or a database access system tied to a database, or the like.
  • storage system 220 receives references to object models from animation environment 200 and object creation environment 210. In return, storage system 220 provides the object model stored therein.
  • Storage system 220 typically also stores the surface shading parameters, overlay images, depth maps, etc. discussed herein.
  • Pixar's object creation environment allowed a user to paint and project images (textures) from multiple views onto an object model in a specific configuration (pose).
  • the inventors of the present invention recognized the object creation environment did not support views of objects in different poses and that it was difficult to apply textures on complicated three dimensional models in a single pose.
  • Figs. 3A-B illustrate a flow process according to an embodiment of the present invention.
  • a three dimensional object model is provided, step 300.
  • one or more users specify a geometric representation of one or more objects via an object creation environment. Together, these objects are combined to form a model of a larger object. In the present embodiment, the modeler may use an object creation environment such as Gepetto, or the like.
  • additional users (such users are termed shaders) specify how the surfaces of the objects should appear, and specify any number of surface effects of the object such as base color, scratches, dirt, displacement maps, roughness and shininess maps, transparency, and control over material type.
  • the user takes the three dimensional object model and specifies an initial pose, step 310.
  • this step need not be specifically performed, as objects have default poses.
  • an object model for a character such as an automobile may have a default pose with its doors closed.
  • the user typically specifies one or more view camera positions. In other embodiments, a number of default cameras may be used for each object.
  • commonly specified projection views include a top view, a left side view, a right side view, a bottom view, and the like. Additionally, the camera may be an oblique view, or the like.
  • Projection views may be planar, but may also be non-planar and projective. As examples, perspective projections are contemplated, where curved projectors map an image onto a curved surface.
  • the computer system renders a two dimensional view of the three-dimensional object in the first pose, step 320. More specifically, the system renders each view by using the object model, the pose, and view camera data.
  • the rendering pass may be a high-quality rendering via a rendering program such as Pixar's RenderMan® product.
  • the rendering / shading process may be performed with a low-quality rendering process, such as GL, and GPU hardware and software renderers.
  • each rendered view may be stored as individual view image files, or combined into a larger file.
  • a depth map is also generated, which the planar projection function, described below, utilizes.
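For orientation only, here is a toy sketch of this rendering step producing a view image together with its depth map via a z-buffer, under the same orthographic-view assumptions as the earlier sketch; real renderings would come from RenderMan® or a GL/GPU renderer as the text notes.

```python
import numpy as np

def render_view_and_depth(points, colors, size=(256, 256)):
    """Rasterize view-space surface samples (orthographic view along +z,
    x/y in [-1, 1], smaller z = closer) into a color image plus the depth
    map that the planar projection step later consumes."""
    h, w = size
    image = np.zeros((h, w, 3))
    depth = np.full((h, w), np.inf)            # z-buffer starts at "far"
    for p, c in zip(points, colors):
        px = int((p[0] * 0.5 + 0.5) * (w - 1))
        py = int((p[1] * 0.5 + 0.5) * (h - 1))
        if 0 <= px < w and 0 <= py < h and p[2] < depth[py, px]:
            depth[py, px] = p[2]               # keep the foremost surface
            image[py, px] = c
    return image, depth
```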
  • the system displays the one or more rendered views of the object, step 330.
  • this step occurs in a user environment that allows the user to graphically assign pixel values on a two dimensional image. Commonly, such environments are termed to include "paint" functionality.
  • the one or more views can be simultaneously displayed to the user.
  • a user assigns pixel values within the views of the object, step 340.
  • the user performs this action by graphically painting "on top" of the views of the object.
  • the painting is analogous to a child painting or coloring an image in a coloring book.
  • the user applies different brushes onto the views of the object, using an overlay layer or the like.
  • the use of mechanisms analogous to "layers" is contemplated herein.
  • the different brushes may have one or more gray scale values, one or more colors, and the like.
  • the user may use a fine black brush to draw a crack-type pattern in an overlay layer of the view.
  • the user may use a spray paint-type brush to darken selected portions in an overlay layer of the view.
  • the user may use a paint brush to color an overlay layer of the view.
  • other ways to specify an overlay layer image are also contemplated, such as the application of one or more gradients to the image, the application of manipulations limited to specific portions of the overlay layer image (e.g. selections), the inclusion of one or more images into an overlay layer image (e.g. a decal layer), and the like.
  • the overlay layer image for each view is then stored, step 350.
  • the overlay layer images are stored in separate and identifiable files from the two dimensional view.
  • the overlay layer image is stored in a layer of the two dimensional view, or the like.
  • the file including the overlay layer image is also associated with the pose defined in step 310 and the depth map determined in step 320, step 360. A hypothetical record layout is sketched below.
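One way to keep that association (step 360) is a small manifest; the file names, keys, and JSON format below are hypothetical, since the source does not specify a storage layout.

```python
import json

# Hypothetical manifest entry linking an overlay image to its view, its
# pose (step 310), and its depth map (step 320).
record = {
    "pose": "table_exploded",
    "view": "front",
    "render": "table_exploded_front.tif",
    "depth_map": "table_exploded_front_z.tif",
    "overlay": "table_exploded_front_color.tif",
    "pass": "color",
}

with open("paint_manifest.json", "w") as f:
    json.dump([record], f, indent=2)
```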
  • the user may decide to re-pose the three-dimensional object in a second pose, step 370.
  • the process described above is then repeated.
  • the process of re-posing the three-dimensional object, creating one or more views and depth maps, painting on top of the views, etc. can be repeated for as many poses as the user deems necessary.
  • for a character object, one pose may be the character with an opened mouth and arms up, and another pose may be the character with a closed mouth and arms down.
  • one pose may be the folding table unfolded, and another pose may be the folding table with its legs "exploded" or separated from the table top.
  • a user may see views derived from different poses of the three-dimensional object on the screen at the same time. Accordingly, the process of viewing and painting described above need not be performed only based upon one pose of the object at a time. Additionally, the user may paint on top of views of the object from different poses in the same session. For example, for a character posed with its mouth open, the user may paint white on a layer on a view showing the character's mouth, then the user may paint black on a layer of a view showing the character's hair, then the user may repaint a different shade of white on the layer on the view showing the character's mouth, and the like.
  • the next step is to associate values painted on each view of the three-dimensional object in the first pose back to the object, step 380.
  • each view of the object is typically a projection of surfaces of the three-dimensional object in the first pose into two dimensions. Accordingly, portions that appear to be "painted upon" by the overlay image are projected back to the three-dimensional object using the associated depth map.
  • This functionality is enabled because the system maintains a linkage among the overlay image, the view, and the pose of the three-dimensional object. In cases where there are multiple rendered views, the paint is projected back for each rendered view to the three-dimensional object in the first pose.
  • surface normals may be used to "feather" the effect of the projection onto surfaces of the three-dimensional object.
  • for a surface that directly faces the projection view, the paint effect may be calculated to be ~100%; whereas for a surface that is at a 30 degree angle to the projection view, the paint effect may be calculated to be ~50% (sin(30)); whereas for a surface that is at a 60 degree angle to the projection view, the paint effect may be calculated to be ~13% (sin(60)); and the like.
  • the amount of feathering may be adjusted by the user. In other embodiments, feathering may be used to vary the paint effect at transition areas, such as the edges or borders of the object, and the like. In various embodiments, feathering of the projected paint reduces smearing of the projected paint.
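The text gives only sample feathering values (~100% facing, ~50% at 30 degrees, ~13% at 60 degrees). One falloff consistent with all three figures is 1 - sin(theta), where theta is the angle between the surface normal and the direction toward the camera; treating that as the rule is an inference on our part, sketched below with illustrative names.

```python
import math

def feather_weight(normal, view_dir, user_scale=1.0):
    """Attenuate projected paint by surface orientation.  `view_dir` points
    from the camera into the scene; both vectors must be unit length.  The
    1 - sin(theta) falloff reproduces the sample values quoted above but is
    an assumption; the source states only those example percentages."""
    facing = -sum(n * v for n, v in zip(normal, view_dir))  # cos(theta)
    if facing <= 0.0:
        return 0.0        # perpendicular or back-facing: receives no paint
    theta = math.acos(min(facing, 1.0))
    return max(0.0, 1.0 - math.sin(theta)) * user_scale

# A surface tilted 30 degrees from the view receives about half the paint:
tilted = (math.sin(math.radians(30)), 0.0, -math.cos(math.radians(30)))
print(feather_weight(tilted, (0.0, 0.0, 1.0)))   # ~0.5
```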
  • FIG. 4 illustrates an example of an embodiment.
  • a three-dimensional cylinder 500 appears as a rectangle 510 in a two-dimensional view 520 of cylinder 500.
  • a user paints an overlay image 530 on top of view 520.
  • the user paints the bottom half of cylinder 500 black.
  • the overlay image is projected back to the three-dimensional cylinder; accordingly, the model of the front bottom surface 540 of cylinder 500 is associated with the property or color of black, feathered as the surface normal points away from the viewing plane.
  • the back bottom surface 550 of cylinder 500 is not associated with the color black, as it was not exposed in view 520.
  • a back view 560 and a bottom view 570 of cylinder 500 could be specified to expose remaining bottom half surfaces of cylinder 500.
  • each view of the object is typically a projection of surfaces of the three-dimensional object in the second pose into two dimensions. Accordingly, portions that appear to be "painted upon" by the overlay image are projected back to the three-dimensional object using the associated depth map. Again, in cases where there are multiple rendered views, the paint is projected back for each rendered view to the three-dimensional object in the second pose.
  • the planar projections from step 380 and step 390 are combined and both projected back to the surface of the three-dimensional object, step 400.
  • the users may paint upon the rendered views of the three-dimensional object that are in different poses, and have the paint data be projected back to a single three-dimensional object in a neutral pose.
  • the inventors of the present invention believe that the above functionality is significant as it allows the user to "paint" hard-to-reach portions of a three-dimensional object by allowing the user to repose the three-dimensional object, and allowing the user to paint upon the resulting rendered view.
  • one pose may be a character with their mouth closed, and another with the mouth open. Further, examples of the use of embodiments of the present invention will be illustrated below.
  • this step 400 can be performed before a formal rendering of the three-dimensional object. In other embodiments, step 400 occurs dynamically during a formal rendering process.
  • the data from steps 380 and 390 may be maintained in separate files.
  • the system dynamically combines the planar projection data from the three-dimensional object in the first pose with the planar projection data from the three-dimensional object in the second pose.
  • the combined planar projection data is then used to render the three-dimensional object, typically in a third pose, step 410.
  • the first pose may be a character with both arms down
  • the second pose may be the character with both arms up
  • the third pose may be the character with only one arm up.
  • the paint data may specify any number of properties for the surface of the object. Such properties are also termed shading pass data. For a typical object surface, there may be more than one hundred shading passes.
  • the paint data may specify surface colors, application of texture maps, application of displacement maps, and the like.
  • the planar projections from steps 380 and 390 may apply the same properties or different properties to the surface of the object.
  • step 380 may be a surface "crack" pass
  • step 390 may be a surface color pass.
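A sketch of how a shader might combine the stored projection coordinates across passes and poses at render time follows; the names, the data shapes, and the "later projections override earlier ones" compositing rule are all assumptions, not the patent's specified behavior.

```python
def shade_point(stored_coords, pass_textures):
    """Evaluate surface parameters for one surface point.
    `stored_coords` maps (pose, view) -> the 2D coordinates produced by the
    reverse map for this point (or None if the point was not visible).
    `pass_textures` maps a pass name (e.g. "cracks", "color") to
    {(pose, view): lookup}, where lookup(u, v) samples that pass's 2D
    texture map and returns None where nothing was painted."""
    result = {}
    for pass_name, projections in pass_textures.items():
        value = None
        for key, lookup in projections.items():
            coords = stored_coords.get(key)
            if coords is None:
                continue                  # point hidden in that pose/view
            painted = lookup(*coords)
            if painted is not None:
                value = painted           # later projections override earlier
        if value is not None:
            result[pass_name] = value
    return result
```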
  • the object is rendered at the same time as other objects in a scene.
  • the rendered scene is typically another two-dimensional image that is then stored, step 420.
  • the rendered scene can be stored in optical form, such as film or optical disk (e.g. CD-ROM, DVD), or the like; magnetic form, such as a hard disk, network drive, or the like; or electronic form, such as an electronic signal, a data packet, or the like.
  • the representation of the resulting rendered scene may later be retrieved and displayed to one or more viewers, step 430.
  • FIGs. 5A-C illustrate one example of an embodiment of the present invention. Specifically, Fig. 5A illustrates a three-dimensional model of a box 600 in a closed pose. In Fig. 5B, a number of two-dimensional views of box 600 are illustrated, including a front view 610, a top view 620, and a side view 630.
  • a user creates overlay images 640-660 on top of views 610-630, respectively. As discussed above, the user typically paints on top of views 610-630 to create overlay images 640-660.
  • Fig. 5D illustrates a three-dimensional model of box 670 in the closed pose after overlay images 640-660 are projected back to the three-dimensional model of box 600 in the closed pose.
  • FIGs. 6A-D illustrate another example of an embodiment of the present invention. Specifically, Fig. 6A illustrates a three-dimensional model of a box 700 in an open pose. In Fig. 6B, two-dimensional views of box 700 are illustrated, including a top view 710, a first cross-section 720, and a second cross-section 730.
  • a user creates overlay images 740-760 on top of views 710-730, respectively. Again, the user typically paints on top of the respective views to create the overlay images.
  • Fig. 6D illustrates a three-dimensional model of box 770 in the open pose after overlay images 740-760 are projected back to the three-dimensional model of box 700 in the open pose.
  • the three-dimensional model of box 670 and of box 770 are then combined into a single three-dimensional model. Illustrated in Fig. 6E is a single three-dimensional model of a box 780 including the projected-back data from Figs. 5C and 6C. As shown in Fig. 6E, the three-dimensional model may be posed in a pose different from the pose in Fig. 5A or Fig. 6A.
  • FIGs. 7A-C illustrate another example of an embodiment of the present invention. More specifically, Fig. 7A illustrates a three-dimensional model of a stool in a default pose 800. In Fig. 7B, a number of views 810 of stool 800 in the default pose are illustrated. In this example, the user can paint upon views 810, as described above. Fig. 7C then illustrates the three-dimensional model of the stool in a second pose 820. As can be seen, the legs 830 of the stool are "exploded" or separated from the sitting surface. A number of views 840 are illustrated. In this example, it can be seen that with views 840, the user can more easily paint the bottom of the sitting surface 850 and the legs 860 of the stool.
  • the painting functions may be provided integral to an object creation environment, a separate shading environment, a third- party paint program (e.g. Photoshop, Maya, Softimage), and the like.
  • Some embodiments described above use planar projection techniques to form views of an object and to project the overlay layer back to the three-dimensional object.
  • Other embodiments may also use non-planar projection techniques to form perspective views of an object and to project back to the three-dimensional object.
  • the process of painting in an overlay layer and performing a planar projection back to the three dimensional object may be done in real-time or near real time for multiple poses of the object.
  • the user may be presented with a first view of the object in the first pose and a second view of the object in a second pose.
  • the user paints in an overlay layer of the first view.
  • a planar projection process occurs that projects the paint back to the three dimensional object.
  • the system re-renders the first view of the object in the first pose and also the second view of the object in the second pose.
  • because the process occurs very quickly, the user can see the effect of the specification of surface parameters on the object in one pose in all other poses (views of other poses), as in the sketch below.
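The interactive loop just described might look like the following sketch; the three injected callables (the renderer, the stroke source, and the projection) stand in for machinery the source does not detail, and their signatures are assumptions.

```python
def interactive_paint_session(obj, poses, views, render, strokes, project):
    """Near-real-time painting loop: after each paint stroke, project the
    overlay back onto the object, then re-render every view of every pose
    so the user immediately sees the paint from all configurations.
    `render(obj, pose, view)` returns an image, `strokes` yields
    (pose, view, overlay) tuples, and `project(obj, pose, view, overlay)`
    performs the planar projection back onto the object."""
    renders = {(p, v): render(obj, p, v) for p in poses for v in views}
    for pose, view, overlay in strokes:          # the user paints a stroke
        project(obj, pose, view, overlay)        # project paint back to 3D
        for p in poses:                          # re-render all views of all
            for v in views:                      # poses so the new paint is
                renders[(p, v)] = render(obj, p, v)  # visible everywhere
    return renders
```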
  • the above embodiments disclose a method for a computer system and a computer system capable of performing the disclosed methods. Additional embodiments include computer program products on tangible media including software code that allows the computer system to perform the disclosed methods, and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Architecture (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)
  • Image Generation (AREA)

Abstract

The invention concerns a method for a computer system which consists of posing a 3D model in a first configuration, determining a first 2D view of this model, posing the same model in a second configuration, determining a corresponding second 2D view, then associating a first 2D image with the first 2D view and a second 2D image with the second 2D view, then associating a first set of surface parameters with the 3D model visible in the first 2D view in response to the first 2D image and to the first configuration of the 3D model, and finally associating a second set of surface parameters with a surface of the 3D model visible in the second 2D view in response to the second 2D image and to the second configuration of the 3D model.
EP04801835A 2003-07-29 2004-03-23 Improved paint projection method and apparatus Withdrawn EP1661116A4 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US49116003P 2003-07-29 2003-07-29
PCT/US2004/008993 WO2005017871A1 (fr) 2004-03-23 Improved paint projection method and apparatus

Publications (2)

Publication Number Publication Date
EP1661116A1 (fr) 2006-05-31
EP1661116A4 EP1661116A4 (fr) 2010-12-01

Family

ID=34193093

Family Applications (1)

Application Number Title Priority Date Filing Date
EP04801835A Withdrawn EP1661116A4 (fr) 2003-07-29 2004-03-23 Procede de projection de peinture ameliore et dispositif correspondant

Country Status (6)

Country Link
EP (1) EP1661116A4 (fr)
JP (1) JP2007500395A (fr)
CN (1) CN1833271B (fr)
CA (1) CA2533451A1 (fr)
NZ (1) NZ544937A (fr)
WO (1) WO2005017871A1 (fr)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080284798A1 (en) * 2007-05-07 2008-11-20 Qualcomm Incorporated Post-render graphics overlays
US8334872B2 (en) * 2009-05-29 2012-12-18 Two Pic Mc Llc Inverse kinematics for motion-capture characters
US8922558B2 (en) * 2009-09-25 2014-12-30 Landmark Graphics Corporation Drawing graphical objects in a 3D subsurface environment
KR101640904B1 (ko) 2012-02-07 2016-07-19 Empire Technology Development LLC Computer-based method, machine-readable non-transitory medium, and server system for providing an online gaming experience
CN111145358B (zh) * 2018-11-02 2024-02-23 Beijing Microlive Vision Technology Co., Ltd. Image processing method, apparatus, and hardware apparatus

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5729704A (en) * 1993-07-21 1998-03-17 Xerox Corporation User-directed method for operating on an object-based model data structure through a second contextual image
US6268865B1 (en) * 1998-01-13 2001-07-31 Disney Enterprises, Inc. Method and apparatus for three-dimensional painting
US20030048277A1 (en) * 2001-07-19 2003-03-13 Jerome Maillot Dynamically adjusted brush for direct paint systems on parameterized multi-dimensional surfaces

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1996034365A1 (fr) * 1995-04-25 1996-10-31 Cognitens Ltd. Appareil et procede pour recreer et manipuler un objet en 3d en fonction d'une projection en 2d de celui-ci
US6052669A (en) * 1997-06-06 2000-04-18 Haworth, Inc. Graphical user interface supporting method and system for remote order generation of furniture products
CN1161714C (zh) * 1999-08-04 2004-08-11 凌阳科技股份有限公司 一种以平行扫描线为处理单元的三维图形处理器及其绘图方法

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5729704A (en) * 1993-07-21 1998-03-17 Xerox Corporation User-directed method for operating on an object-based model data structure through a second contextual image
US6268865B1 (en) * 1998-01-13 2001-07-31 Disney Enterprises, Inc. Method and apparatus for three-dimensional painting
US20030048277A1 (en) * 2001-07-19 2003-03-13 Jerome Maillot Dynamically adjusted brush for direct paint systems on parameterized multi-dimensional surfaces

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
HANRAHAN P ET AL: "Direct WYSIWYG painting and texturing on 3D shapes", COMPUTER GRAPHICS, vol. 24, no. 4, 1990, pages 215-223, XP002076094, ISSN: 0097-8930 *
PEDERSEN H K ED - ASSOCIATION FOR COMPUTING MACHINERY: "A FRAMEWORK FOR INTERACTIVE TEXTURING ON CURVED SURFACES" COMPUTER GRAPHICS PROCEEDINGS 1996 (SIGGRAPH). NEW ORLEANS, AUG. 4 - 9, 1996; [COMPUTER GRAPHICS PROCEEDINGS (SIGGRAPH)], NEW YORK, NY : ACM, US, 4 August 1996 (1996-08-04), pages 295-302, XP000682745 *
See also references of WO2005017871A1 *
VAN WIJK J J ET AL: "Some issues in designing user interfaces to 3D raster graphics" COMPUTER GRAPHICS FORUM NETHERLANDS, vol. 4, no. 1, January 1985 (1985-01), pages 5-10, XP002606063 ISSN: 0167-7055 *

Also Published As

Publication number Publication date
EP1661116A4 (fr) 2010-12-01
CN1833271A (zh) 2006-09-13
JP2007500395A (ja) 2007-01-11
WO2005017871A1 (fr) 2005-02-24
CA2533451A1 (fr) 2005-02-24
CN1833271B (zh) 2010-05-05
NZ544937A (en) 2009-03-31

Similar Documents

Publication Publication Date Title
Hornung et al. Character animation from 2d pictures and 3d motion data
US9142056B1 (en) Mixed-order compositing for images having three-dimensional painting effects
US7436404B2 (en) Method and apparatus for rendering of translucent objects using volumetric grids
US7184043B2 (en) Color compensated translucent object rendering methods and apparatus
US20060022991A1 (en) Dynamic wrinkle mapping
US8482569B2 (en) Mesh transfer using UV-space
US8988461B1 (en) 3D drawing and painting system with a 3D scalar field
US20090213138A1 (en) Mesh transfer for shape blending
US7995060B2 (en) Multiple artistic look rendering methods and apparatus
US8704823B1 (en) Interactive multi-mesh modeling system
US7176918B2 (en) Three-dimensional paint projection weighting of diffuse and scattered illumination methods and apparatus
US8054311B1 (en) Rig baking for arbitrary deformers
US7443394B2 (en) Method and apparatus for rendering of complex translucent objects using multiple volumetric grids
EP2260403B1 (fr) Transfert de maille
DiVerdi A brush stroke synthesis toolbox
EP1661116A1 (fr) Improved paint projection method and apparatus
US8669980B1 (en) Procedural methods for editing hierarchical subdivision surface geometry
Barroso et al. Automatic Intermediate Frames for Stroke-based Animation
Thoma et al. Non-Photorealistic Rendering Techniques for Real-Time Character Animation
Klein An image-based framework for animated non-photorealistic rendering
Boubekeur et al. Interactive out-of-core texturing using point-sampled textures
WO2005116929A2 (fr) Three-dimensional paint projection weighting of diffuse and scattered illumination methods and apparatus
Lee et al. Interactive composition of 3D faces for virtual characters
Sousa et al. An Advanced Color Representation for Lossy
Duce et al. A Formal Specification of a Graphics System in the

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20060209

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PL PT RO SE SI SK TR

DAX Request for extension of the european patent (deleted)
A4 Supplementary search report drawn up and despatched

Effective date: 20101104

RIC1 Information provided on ipc code assigned before grant

Ipc: G06T 17/40 20060101ALI20101022BHEP

Ipc: G09G 5/00 20060101AFI20050304BHEP

Ipc: G06T 13/00 20060101ALI20101022BHEP

Ipc: G06T 11/00 20060101ALI20101022BHEP

17Q First examination report despatched

Effective date: 20110405

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20140430