WO2013109246A1 - Gestures and tools for creating and editing solid models - Google Patents

Gestures and tools for creating and editing solid models

Info

Publication number
WO2013109246A1
Authority
WO
WIPO (PCT)
Prior art keywords
modeling
tool
gesture
computer
touch event
Application number
PCT/US2012/021448
Other languages
French (fr)
Inventor
Gregory W. FOWLER
Vincent Ma
Hans-Frederick Brown
Original Assignee
Autodesk, Inc.
Application filed by Autodesk, Inc. filed Critical Autodesk, Inc.
Priority to PCT/US2012/021448 priority Critical patent/WO2013109246A1/en
Publication of WO2013109246A1 publication Critical patent/WO2013109246A1/en

Classifications

    • G06F3/0488 — Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04815 — Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G06F3/04845 — Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06F30/00 — Computer-aided design [CAD]
    • G06T19/20 — Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T2219/2004 — Indexing scheme for editing of 3D models: aligning objects, relative positioning of parts
    • G06T2219/2016 — Indexing scheme for editing of 3D models: rotation, translation, scaling
    • G06T2219/2021 — Indexing scheme for editing of 3D models: shape modification

Definitions

  • Embodiments of the invention provide a multi-touch 3D modeling system that is based on the idea of using life-like drawing tools on a blank canvas.
  • a single tool provides the ability to automatically control creating, positioning, editing, scaling, and posing based on the view direction and multi-touch events. All of these operations are provided within the same context and without exiting the tool for 3D navigation operations.
  • a user is provided with access to a number of modeling interactions (e.g., creating/editing) that dynamically create base geometry, that can be refined and later dynamically sculpted using 3D modeling tools.
  • a 3D contriver tool With the single tool (referred to herein as a 3D contriver tool), the user can explore new 3D creations without requiring special commands or modes.
  • Such an approach maintains the artistic flow that users appreciate from prior art brushing and stroking systems.
  • FIG. 1 is an exemplary hardware and software environment 100 used to implement one or more embodiments of the invention.
  • the hardware and software environment includes a computer 102 and may include peripherals.
  • Computer 102 may be a user/client computer, server computer, or may be a database computer.
  • the computer 102 comprises a general purpose hardware processor 104A and/or a special purpose hardware processor 104B (hereinafter alternatively collectively referred to as processor 104) and a memory 106, such as random access memory (RAM).
  • the computer 102 may be coupled to and/or integrated with other devices, including input/output (I/O) devices such as a keyboard 114, a cursor control device 116 (e.g., a mouse, a pointing device, pen and tablet, touch screen, multi-touch device, etc.) and a printer 128.
  • computer 102 may be coupled to, or may comprise, a portable or media viewing/listening device 132 (e.g., an MP3 player, iPodTM, NookTM, portable digital video player, cellular device, personal digital assistant, etc.).
  • the computer 102 operates by the general purpose processor 104A performing instructions defined by the computer program 110 under control of an operating system 108.
  • the computer program 110 and/or the operating system 108 may be stored in the memory 106 and may interface with the user and/or other devices to accept input and commands and, based on such input and commands and the instructions defined by the computer program 110 and operating system 108, to provide output and results.
  • Output/results may be presented on the display 122 or provided to another device for presentation or further processing or action.
  • the display 122 comprises a liquid crystal display (LCD) having a plurality of separately addressable liquid crystals.
  • the display 122 may comprise a light emitting diode (LED) display having clusters of red, green and blue diodes driven together to form full-color pixels.
  • Each liquid crystal or pixel of the display 122 changes to an opaque or translucent state to form a part of the image on the display in response to the data or information generated by the processor 104 from the application of the instructions of the computer program 110 and/or operating system 108 to the input and commands.
  • the image may be provided through a graphical user interface (GUI) module 118 A.
  • the instructions performing the GUI functions can be resident or distributed in the operating system 108, the computer program 110, or implemented with special purpose memory and processors.
  • the display 122 is integrated with/into the computer 102 and comprises a multi-touch device having a touch sensing surface (e.g., track pod or touch screen) with the ability to recognize the presence of two or more points of contact with the surface.
  • multi-touch devices include mobile devices (e.g., iPhoneTM, Nexus STM, DroidTM devices, etc.), tablet computers (e.g., iPadTM, HP TouchpadTM), portable/handheld game/music/video player/console devices (e.g., iPod TouchTM, MP3 players, Nintendo 3DSTM, PlayStation PortableTM, etc.), touch tables, and walls (e.g., where an image is projected through acrylic and/or glass, and the image is then backlit with LEDs).
  • Some or all of the operations performed by the computer 102 according to the computer program 110 instructions may be implemented in a special purpose processor 104B.
  • the some or all of the computer program 110 instructions may be implemented via firmware instructions stored in a read only memory (ROM), a programmable read only memory (PROM) or flash memory within the special purpose processor 104B or in memory 106.
  • the special purpose processor 104B may also be hardwired through circuit design to perform some or all of the operations to implement the present invention. Further, the special purpose processor 104B may be a hybrid processor, which includes dedicated circuitry for performing a subset of functions, and other circuits for performing more general functions such as responding to computer program instructions.
  • the special purpose processor is an application specific integrated circuit (ASIC).
  • the computer 102 may also implement a compiler 112 which allows an application program 110, written in a programming language such as COBOL or Pascal, to be translated into code executable by the processor 104. The compiler 112 may be an interpreter that executes source code directly, translates source code into an intermediate representation that is executed, or executes stored precompiled code. Source code may be written in a variety of programming languages such as Java™.
  • the application or computer program 110 accesses and manipulates data accepted from I/O devices and stored in the memory 106 of the computer 102 using the relationships and logic that was generated using the compiler 112.
  • the computer 102 also optionally comprises an external communication device such as a modem, satellite link, Ethernet card, or other device for accepting input from and providing output to other computers 102.
  • instructions implementing the operating system 108, the computer program 110, and the compiler 112 are tangibly embodied in a non-transient computer-readable medium, e.g., data storage device 120, which could include one or more fixed or removable data storage devices, such as a zip drive, floppy disc drive 124, hard drive, CD-ROM drive, tape drive, etc.
  • the operating system 108 and the computer program 110 are comprised of computer program instructions which, when accessed, read, and executed by the computer 102, cause the computer 102 to perform the steps necessary to implement and/or use the present invention or to load the program of instructions into a memory, thus creating a special purpose data structure causing the computer to operate as a specially programmed computer executing the method steps described herein.
  • Computer program 110 and/or operating instructions may also be tangibly embodied in memory 106 and/or data communications devices 130, thereby making a computer program product or article of manufacture according to the invention.
  • FIG. 2 schematically illustrates a typical distributed computer system 200 using a network 202 to connect client computers 102 to server computers 206.
  • a typical combination of resources may include a network 202 comprising the Internet, LANs (local area networks), WANs (wide area networks), SNA (systems network architecture) networks, or the like, clients 102 that are personal computers or workstations, and servers 206 that are personal computers, workstations, minicomputers, or mainframes (as set forth in FIG. 1).
  • a network 202 such as the Internet connects clients 102 to server computers 206.
  • Network 202 may utilize ethernet, coaxial cable, wireless communications, radio frequency (RF), etc. to connect and provide the communication between clients 102 and servers 206.
  • Clients 102 may execute a client application or web browser and communicate with server computers 206 executing web servers 210.
  • Such a web browser is typically a program such as MICROSOFT INTERNET EXPLORERTM, MOZILLA FIREFOXTM, OPERATM, APPLE SAFARITM, etc.
  • the software executing on clients 102 may be downloaded from server computer 206 to client computers 102 and installed as a plug in or ACTIVEXTM control of a web browser.
  • clients 102 may utilize ACTIVEXTM components/component object model (COM) or distributed COM (DCOM) components to provide a user interface on a display of client 102.
  • the web server 210 is typically a program such as MICROSOFT'S INTERNET INFORMATION SERVER™.
  • Web server 210 may host an Active Server Page (ASP) or Internet Server Application Programming Interface (ISAPI) application 212, which may be executing scripts.
  • the scripts invoke objects that execute business logic (referred to as business objects).
  • the business objects then manipulate data in database 216 through a database management system (DBMS) 214.
  • database 216 may be part of, or connected directly to, client 102 instead of communicating/obtaining the information from database 216 across network 202.
  • the scripts executing on web server 210 (and/or application 212) invoke COM objects that implement the business logic.
  • server 206 may utilize MICROSOFT'STM Transaction Server (MTS) to access required data stored in database 216 via an interface such as ADO (Active Data Objects), OLE DB (Object Linking and Embedding DataBase), or ODBC (Open DataBase Connectivity).
  • these components 200-216 all comprise logic and/or data that is embodied in and/or retrievable from a device, medium, signal, or carrier, e.g., a data storage device, a data communications device, a remote computer or device coupled to the computer via a network or via another data communications device, etc.
  • this logic and/or data when read, executed, and/or interpreted, results in the steps necessary to implement and/or use the present invention being performed.
  • computers 102 and 206 may include thin client devices with limited or full processing capabilities, portable devices such as cell phones, notebook computers, pocket computers, multi-touch devices, and/or any other devices with suitable processing, communication, and input/output capability.
  • Embodiments of the invention are implemented as a software application/3D contriver tool on a client 102 or server computer 206.
  • the client 102 or server computer 206 may comprise a thin client device or a portable device that has a multi-touch-based display.
  • the 3D contriver tool for multi-touch devices can be grouped in different clusters of functionality (that are described in the present application and/or the related applications identified and cross referenced above) as follows:
  • Embodiments of the invention (e.g., system) will provide a single tool that permits multiple dynamic modeling operations and navigation within the same context without requiring complex gestures or modes.
  • FIG. 3 illustrates a visual representation for a grid system (on a multi-touch device) that controls which gestures are either captured as modeling operations or navigational operations, specifically tumbling/orbiting.
  • the user has activated the modeling tool 302. Once the modeling tool 302 has been selected, the system displays a grid 300 composed of three specific regions.
  • the center region 304 represents an area that will generate geometry if a touch event is detected (see below for more detail regarding modeling operations).
  • the first outer region 306 represents an area that will either trigger a re-stroking operation (re-brushing the geometry after the initial creation) when a form is active or trigger a tumbling/orbit navigation if no form is active (see below for more details on re-stroking modeling operations).
  • the second outer region/fall-off grid 308 represents an area that will always trigger a tumbling/orbit navigation if a touch event is detected. Any other touch event detected outside of the fall-off grid 308 will also trigger tumbling/orbit navigation.
  • Embodiments of the invention perform the desired operation based on where the touch event commences and not where the gesture following the touch event proceeds. Thus, merely by commencing a touch event/gesture at a particular location with respect to the grid 300, a particular operation is performed.
  • the FIGs. and description that follow illustrate examples of the different operations that may be performed based on the grid 300.
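  • As an illustration only (the patent does not give an implementation), the region-to-operation mapping of FIG. 3 can be pictured as a simple hit test on the touch-down position; the Python sketch below uses hypothetical rectangular bounds for regions 304 and 306, and treats region 308 and everything beyond it the same way:

```python
from dataclasses import dataclass

@dataclass
class Rect:
    """Axis-aligned bounds of one grid region, in modeling-plane coordinates."""
    x0: float
    y0: float
    x1: float
    y1: float

    def contains(self, x: float, y: float) -> bool:
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

# Hypothetical extents: center region 304 nested inside the first outer region 306.
CENTER_304 = Rect(-1.0, -1.0, 1.0, 1.0)
OUTER_306 = Rect(-2.0, -2.0, 2.0, 2.0)

def operation_for_touch_start(x: float, y: float, form_active: bool) -> str:
    """Pick the operation from where the touch event *begins* (cf. FIG. 3)."""
    if CENTER_304.contains(x, y):
        return "create_geometry"                 # region 304: modeling
    if OUTER_306.contains(x, y):
        # region 306: re-stroke when a form is active, otherwise tumble/orbit
        return "re_stroke" if form_active else "tumble_orbit"
    # region 308 (the fall-off grid), and anything outside it, always tumbles/orbits
    return "tumble_orbit"

if __name__ == "__main__":
    print(operation_for_touch_start(0.5, 0.2, form_active=False))  # create_geometry
    print(operation_for_touch_start(1.5, 0.0, form_active=True))   # re_stroke
    print(operation_for_touch_start(4.0, 4.0, form_active=False))  # tumble_orbit
```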
  • FIGs. 4A and 4B illustrate a tumbling/orbiting operation based on a touch event occurring in region 308 of the grid 300.
  • FIGs. 4A and 4B further illustrate how the grid system adapts itself based on the viewing angle.
  • the general idea is to always display the optimal grid representation, derived from the current viewing angle. Accordingly, depending on how the user is viewing the grid 300, the grid may be flipped to place the user in an optimal viewing position/angle.
  • the current viewing angle may be determined as set forth in a dominant plane as described in copending patent application 13/085,195, which is incorporated by reference herein.
  • embodiments of the invention dynamically switch to one of the dominant planes: XY, XZ, YZ and update the graphical representation of the grid accordingly.
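  • A minimal sketch of one way the dominant plane could be chosen from the viewing angle (the patent defers the actual method to application 13/085,195; the largest-component rule below is only an assumption):

```python
def dominant_plane(view_dir):
    """Return 'XY', 'XZ', or 'YZ' for a camera view direction (dx, dy, dz).

    Assumption (not from the patent text): the dominant plane is the one whose
    normal axis has the largest absolute component in the view direction, i.e.
    the plane the user is looking at most squarely.
    """
    dx, dy, dz = (abs(c) for c in view_dir)
    if dz >= dx and dz >= dy:
        return "XY"   # looking mostly along Z -> model on the XY plane
    if dy >= dx:
        return "XZ"   # looking mostly along Y -> model on the XZ plane
    return "YZ"       # looking mostly along X -> model on the YZ plane

if __name__ == "__main__":
    print(dominant_plane((0.1, 0.2, -0.97)))  # XY
    print(dominant_plane((0.9, 0.3, 0.2)))    # YZ
```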
  • FIG. 4A shows the final state of the grid system while the modeling tool 302 is active.
  • the captured gesture 402A occurs entirely outside of the grid 300 (i.e., in region 308), thus invoking a tumble/orbit.
  • the captured gesture 402A begins and ends in area 308 and not within region 304 and/or 306.
  • the resulting viewing angle determines that the XZ plane is dominant and all modeling operations in region 304 or any re-stroking operation in regions 304-306 will be projected to the XZ plane.
  • FIG. 4B shows the final state of the grid system while the modeling tool 302 is active.
  • the captured gesture 402B begins outside of the grid 300 (i.e., in region 308) thus invoking a tumble/orbit.
  • the captured gesture begins outside of the grid system in region 308 but proceeds into region 306.
  • the resulting viewing angle determines that the YZ plane is dominant and all modeling operations in region 304 or any re-stroking operation in regions 304-306 will be projected to the YZ plane.
  • embodiments of the invention evaluate the action that is to be executed based on where the touch event begins/commences rather than where the touch event proceeds or ends. Accordingly, in FIGs. 4A and 4B, since the touch event/gesture 402 begins in area 308, a navigation (e.g., tumbling/orbiting) operation is performed.
  • FIG. 5 illustrates an exemplary modeling operation that dynamically creates a 3D form based on a modeling operation performed on empty space from within region 304.
  • the user has activated the modeling tool 302, and the system displays an XY grid, determined by the current viewing angle.
  • the user starts brushing from a position inside the modeling grid 304.
  • the system dynamically creates a 3D form 504.
  • the form shaping is interactive and updates the form 504 every time it samples the gesture 502.
  • Such form shaping is performed dynamically in real-time as the user performs the stroke/gesture. Accordingly, as the user is moving a finger, the model is changing.
  • prior art users were required to draw a curve, select the drawn curve, and select a profile. Thereafter, the user would perform a sweep/jigging operation/process.
  • Such a sweep operation is not dynamically created as the user inputs a gesture but instead is based on an already drawn curve that is selected by the user.
  • the system finishes the shaping of the 3D form 504.
  • the user can then tumble/orbit (e.g., as described above with respect to FIGs. 4A and 4B) or re-stroke the 3D form 504 (e.g., as described below).
  • a creation/modeling operation is performed based on the user's gesture 502.
  • a 3D form 504 is dynamically created and conforms to the shape of the gesture/stroke 502.
  • the user did not need to select a creation operation (e.g., from a menu or otherwise). Instead, the user simply began the gesture 502 within region 304.
  • a 3D form 504 is displayed on the grid 300 and is dynamically updated to conform to the stroke 506 while the stroke 506 is drawn.
  • the grid system 300 of the invention enables the user to perform a desired operation merely by beginning a gesture in a particular area/region of the grid system 300.
  • a pair of curves may be created and used to produce a generalized tube surface that interpolates both curves simultaneously.
  • Embodiments of the invention may utilize a Catmull-Clark subdivision surface as the tube surface. With such a subdivision surface, the problem reduces to building an appropriate base mesh whose limit surface shall be the tube surface that interpolates the given curves.
  • Embodiments of the invention assume that such curves are already in a form ready to use and that the curves have compatible lengths, orientation, and structure. Further, both curves may be required to be uniform cubic b-splines that have the same number of spans. Having such a requirement may be necessary because the CV (control vertex) hulls of the curves may be used to guide the building of the base mesh.
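  • The span-count requirement can be illustrated with a small compatibility check (the names, and the convention that a uniform cubic b-spline with n control vertices has n - 3 spans, are assumptions for illustration, not taken from the patent):

```python
def curves_compatible(cv_hull_a, cv_hull_b, degree=3):
    """Return True if two uniform cubic b-spline CV hulls can guide one base mesh.

    Assumed convention: a uniform b-spline of the given degree with n control
    vertices has n - degree spans, so equal CV counts imply equal span counts.
    Orientation/structure checks are omitted from this sketch.
    """
    if len(cv_hull_a) != len(cv_hull_b):
        return False                      # different number of spans
    return len(cv_hull_a) >= degree + 1   # enough CVs for at least one span

if __name__ == "__main__":
    print(curves_compatible([(0, 0)] * 6, [(1, 1)] * 6))   # True
    print(curves_compatible([(0, 0)] * 6, [(1, 1)] * 5))   # False
```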
  • the first step is that of building the base mesh that encompasses two polygonal complexes so that the final subdivision surface will interpolate both curves at once (e.g., see [5] and [6] for details regarding polygonal complexes).
  • To build the polygonal complex for one curve, embodiments of the invention build a section of a "polygonal complex" that interpolates a CV of the curve C0.
  • FIG. 6A illustrates a "polygonal complex" in accordance with one or more embodiments of the invention.
  • FIG. 6A can be constructed such that:
  • FIG. 6B illustrates both sides of the "polygonal complex" in accordance with one or more embodiments of the invention. From FIG. 6B (not drawn to scale), if the base mesh profile is to be a regular hexagon, the sides must have the same length. In particular, t0m0 is the hypotenuse of the triangle m0t0p0, which means:
  • the hexagonal profile need not be regular. Using equations (7) and (12), one can vary o and a, within limits, to obtain variations of the shape of the tube. At the limits on the length a, the profile turns into a 4-sided profile, as described below.
  • the polygonal complexes for both sides need not be symmetric: without that restriction, any reasonable values for o0 and o1 can be chosen to create a lopsided 6-sided profile.
  • the 4-sided profile need not be a square.
  • the point p can be located anywhere between c0 and c1, and then ot and ob can even be different.
  • this construction method can only yield a rhombic profile, and is not able to create a rectangular profile.
  • a rectangular profile (where the angles at mo, mi, t, and b of the profile are 90°) can be created - but the derivation may have many variations depending on the constraints at hand.
  • the CV hull of a cubic b-spline can be used as the base polygon for a Catmull-Clark subdivision curve.
  • the resulting subdivision curve may also fail to interpolate the endpoints of the input curves.
  • Embodiments of the invention may apply the techniques described in [7] to modify the CV hull of the input open curves.
  • Four techniques may be used to modify the end CVs of the cubic b-spline curve (e.g., see [7]).
  • embodiments of the invention utilize a method that modifies the last two (2) CVs on either end of the hull to satisfy the so-called "Bezier end-constraint":
  • A' = C + 6(A - B)    (17)
  where A, B, and C are the last three (3) original CVs of the one end of the curve, and A' and B' are the modified CVs. Subsequently, the modified hulls are used to generate the polygonal complexes and base mesh, as described above.
  • FIGs. 7A-7D illustrate the generation of a base mesh in accordance with one or more embodiments of the invention.
  • the user begins the gesture at point 702 from within grid region 304. Since the user is within grid region 304, a creation operation is performed. As the user drags his finger, the system dynamically creates the base mesh 704.
  • the shape of the base mesh 704 conforms to, and is consistent with, the user's gesture 706.
  • the base mesh is continuously and dynamically created in real-time as the gesture 706 is input by the user and sampled.
  • the CV hulls of the curves (i.e., dynamically created based on the user's gesture 706 in real-time) are used to guide the building of the base mesh 704.
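  • A simplified sketch of how paired CV hulls could guide a base mesh: each pair of corresponding CVs spawns a profile ring, and consecutive rings are stitched into quad faces whose Catmull-Clark limit approximates the tube. A 4-sided profile and a fixed offset direction are used here for brevity; the hexagonal polygonal-complex construction of FIGs. 6A-6C stitches the same way. All names are illustrative, not taken from the patent:

```python
import numpy as np

def base_mesh_from_hulls(cv0, cv1, half_width=0.5):
    """Build a quad base mesh for a tube guided by two CV hulls of equal length.

    cv0, cv1: (n, 3) arrays of control vertices.  A 4-sided profile ring is
    placed at each CV pair: the two CVs themselves plus points offset above and
    below their midpoint (offset direction chosen arbitrarily here).
    """
    cv0, cv1 = np.asarray(cv0, float), np.asarray(cv1, float)
    assert cv0.shape == cv1.shape, "curves must have compatible CV hulls"
    up = np.array([0.0, 0.0, 1.0])            # assumed profile offset direction
    verts, rings = [], []
    for a, b in zip(cv0, cv1):
        mid = 0.5 * (a + b)
        ring = [a, mid + half_width * up, b, mid - half_width * up]
        rings.append(range(len(verts), len(verts) + 4))
        verts.extend(ring)
    faces = []
    for r0, r1 in zip(rings, rings[1:]):       # stitch consecutive rings
        r0, r1 = list(r0), list(r1)
        for k in range(4):
            faces.append((r0[k], r0[(k + 1) % 4], r1[(k + 1) % 4], r1[k]))
    return np.array(verts), faces

if __name__ == "__main__":
    c0 = [(x, 0.0, 0.0) for x in range(4)]
    c1 = [(x, 1.0, 0.0) for x in range(4)]
    v, f = base_mesh_from_hulls(c0, c1)
    print(len(v), "vertices,", len(f), "quad faces")   # 16 vertices, 12 quad faces
```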
  • FIGs. 8A-8D illustrate a subdivision surface that is created dynamically consistent with a base mesh.
  • the user begins the gesture 806 at point 802 and the system dynamically creates the subdivision surface 804.
  • FIGs. 8C and 8D further illustrate the dynamic creation of the subdivision surface 804 in real-time as the gesture is input by the user.
  • the interpolation to create the subdivision surface is performed in real time.
  • the system utilizes the base mesh and as the gesture is input, the system determines the faces that should be added/interpolated based on the base mesh.
  • Such interpolation and 3D subdivision surface creation is performed dynamically as the gesture is input as illustrated in FIGs. 8A-8D.
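  • The dynamic creation can be pictured as an update loop: each accepted gesture sample rebuilds the base mesh and re-runs a few subdivision iterations before the next redraw. The sketch below is structural only; the _base_mesh and _subdivide placeholders stand in for the polygonal-complex construction above and for a real Catmull-Clark step, and the threshold value is an assumption:

```python
class DynamicTubeBuilder:
    """Rebuilds the displayed surface every time the gesture is sampled."""

    def __init__(self, min_sample_dist=0.05, subdivision_levels=2):
        self.samples = []                       # gesture points in plane coordinates
        self.min_sample_dist = min_sample_dist
        self.levels = subdivision_levels

    def add_sample(self, x, y):
        """Accept a new touch sample if it moved far enough, then rebuild."""
        if self.samples:
            px, py = self.samples[-1]
            if (x - px) ** 2 + (y - py) ** 2 < self.min_sample_dist ** 2:
                return None                     # ignore jitter
        self.samples.append((x, y))
        return self._rebuild()

    def _rebuild(self):
        base = self._base_mesh(self.samples)    # cf. FIGs. 7A-7D
        surface = base
        for _ in range(self.levels):            # cf. FIGs. 8A-8D
            surface = self._subdivide(surface)
        return surface

    def _base_mesh(self, samples):
        # Placeholder: a real implementation derives guide curves from the
        # samples and builds the polygonal-complex base mesh described above.
        return list(samples)

    def _subdivide(self, mesh):
        # Placeholder for one Catmull-Clark subdivision step.
        return mesh

if __name__ == "__main__":
    b = DynamicTubeBuilder()
    for p in [(0, 0), (0.2, 0.1), (0.21, 0.1), (0.5, 0.4)]:
        b.add_sample(*p)
    print(len(b.samples), "samples kept")        # 3 (the jittery sample is dropped)
```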
  • FIG. 9A illustrates an exemplary continued user interaction with the modeling tool 302 and 3D form of FIG. 5(A).
  • the user starts re-stroking from a position 902A inside the modeling grid 306.
  • the system dynamically reshapes the 3D form 504.
  • the form 504 re-shaping is dynamic, interactive, and updates the form 504 every time it samples the gesture (i.e., dynamically in real-time).
  • the re-stroking modifies the 3D form 504 in relationship to the current XY grid.
  • the system finishes re-shaping the 3D form 504 and the user can then either tumble/orbit or re-stroke the 3D form 504.
  • FIGs. 9B-9E illustrate dynamic re-stroking of the subdivision surface in accordance with one or more embodiments of the invention.
  • the re-stroking commences at point 902B within region 306 (while a geometry/form 504 is active). Accordingly, the system performs a re-stroking operation rather than a tumbling operation.
  • the progression through FIGs. 9B-9E illustrate how the reshaping of form 504 occurs in real-time dynamically as the gesture 904B is input by the user.
  • the surface is re- interpolating the curves (created from the gesture 904B) based on the base mesh corresponding to such curves.
  • Such interpolation and display of the modified form 504 is performed in real-time dynamically. In this regard, the user is not required to select a particular curve, profile or otherwise before the geometric form 504 is modified.
  • In FIG. 9F, the user continues interacting with the modeling tool 302 and the model of FIG. 9A.
  • the image shows the final state of the grid system 300 while the modeling tool 302 is active.
  • the captured gesture 904F occurs/commences at point 902F outside of the grid system 300 (i.e., in region 308) thus invoking a tumble/orbit.
  • the resulting viewing angle (based on the gesture 904F that stops at point 906F) determines that the YZ plane is dominant and all re-stroking operations will be projected to the YZ plane.
  Re-Stroking in Different Plane Interactions
  • FIG. 9G illustrates an example of a user continuing to interact using a modeling tool 302 with the model form of FIG. 9F in a different plane.
  • the user starts re-stroking from a position 902G inside the first outer grid 306.
  • the system dynamically re-shapes the 3D form 504.
  • the form re-shaping is interactive, dynamic, and updates the form 504 every time the gesture 904G is sampled.
  • the re-stroking modifies the 3D form 504 in relationship to the current YZ grid. Once the user has finished the re-stroking gesture at point 906G describing the path 904G, the system finishes the re-shaping of the 3D form 504 and the user can then either tumble/orbit or re-stroke the 3D form 504.
  • FIG. 10 illustrates an example of a user creating a 3D solid form using two fingers simultaneously.
  • the user has activated the modeling tool 302, and in response, the system displays an XY grid determined by the current viewing angle.
  • the user starts brushing from two positions 1002A and 1002B inside the modeling grid 304. This can be done using two hands or two fingers on the same hand.
  • the system dynamically creates a 3D form 1006.
  • the form shaping is interactive and updates the form every time it samples the gesture.
  • the modeling tool and interaction described above may also be used to add shapes to newly created 3D forms.
  • a tool may be used to modify and add 3D geometry to an already existing form (e.g., based on direction).
  • FIGs. 11A and 11B illustrate a user brush modeling from a face to add geometry to the face in accordance with one or more embodiments of the invention.
  • In response to the user activating the modeling tool 302, the system displays an XY grid determined by the current viewing angle.
  • the user starts brushing from a position on the existing 3D form 1102.
  • the system dynamically adds to the existing 3D form 1100.
  • the form shaping is interactive and updates the form 1100 every time the gesture 1104 is sampled.
  • the additional shape 1106 added to the 3D form 1100 is relative to the current XY grid. In other words, based on the view direction and the dynamic stroking 1104 (from 1102 to 1108), an extrusion 1106 comes out of the face 1100.
  • the model 1100 is dynamically changing.
  • the system finishes shaping the 3D form 1100 (i.e., by adding geometry 1106).
  • the user can then either tumble/orbit or re-stroke the 3D form 1100 (e.g., see above for details regarding re-stroke modeling operations).
  • a base mesh may be used/created as the gesture 1104 is performed. As the user is extruding with the finger, the system is calibrating how many additional faces should be interjected.
  • new faces 1106 and a new base mesh are created by the user dynamically as an arbitrary gesture 1104 is drawn.
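  • One way to picture the calibration of how many additional faces to interject is to insert a new ring of faces each time the stroke has travelled a fixed arc length from the brushed face; the spacing value and names below are assumptions made only for illustration:

```python
import math

def ring_positions(gesture, ring_spacing=0.25):
    """Return the gesture points at which a new ring of faces would be inserted.

    gesture: list of (x, y) samples starting on the brushed face.
    A ring is interjected whenever the accumulated stroke length since the last
    ring exceeds ring_spacing (the spacing value is an arbitrary assumption).
    """
    rings, travelled = [], 0.0
    for (x0, y0), (x1, y1) in zip(gesture, gesture[1:]):
        travelled += math.hypot(x1 - x0, y1 - y0)
        if travelled >= ring_spacing:
            rings.append((x1, y1))
            travelled = 0.0
    return rings

if __name__ == "__main__":
    stroke = [(0.0, 0.0), (0.2, 0.0), (0.4, 0.1), (0.6, 0.3), (0.9, 0.3)]
    print(len(ring_positions(stroke)), "rings of new faces")   # 3
```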
  • FIGs. 12A-12C illustrate the re-stroking of the model of FIG. 11B in the original plane, the tumbling/orbiting operation, and the re-stroking in the new different plane, respectively.
  • In FIG. 12A, the user continues interacting with the modeling tool 302 of FIG. 11B.
  • the user starts re-stroking from a position 1202 inside the modeling grid.
  • the system dynamically re- shapes the 3D form 1200.
  • the form re-shaping is interactive and updates the form 1200 every time the gesture 1204 is sampled.
  • the re-stroking modifies the 3D form 1200 relative to the current XY grid.
  • the system finishes the re-shaping of the 3D form 1200.
  • the user can then either tumble/orbit or re-stroke the 3D form 1200.
  • a re-stroking operation is performed rather than an orbit/creation operation.
  • the re-stroking operation is performed with respect to the existing geometry 1200 and serves to modify a face 1208 of the existing geometry 1200.
  • both the geometry 1200 and the underlying grid are orbited and displayed in the new orientation.
  • In FIG. 12C, the user continues interacting with the modeling tool 302 and has tumbled/orbited, thereby updating the plane to the XZ grid as described with respect to FIG. 12B.
  • the user starts re-stroking from a position 1214 inside the first outer grid (i.e., area 306).
  • the system dynamically re-shapes the 3D form 1200.
  • the form re-shaping is interactive and updates the form 1200 every time the gesture 1216 is sampled.
  • the re-stroking modifies the 3D form 1200 in relationship to the current XZ grid.
  • the system finishes re-shaping the 3D form 1200.
  • the user can then either tumble/orbit or re- stroke the 3D form 1200.
  • FIG. 13 illustrates the user continuing to interact with the modeling tool 302 and the geometry of FIG. 12C.
  • the user strokes downwards from a given visual scale grip 1302.
  • the system dynamically scales the 3D form 1200.
  • the form scaling is interactive and updates the form 1200 every time the gesture 1304 is sampled.
  • the system finishes scaling the 3D form 1200.
  • the user can then either tumble/orbit or re-stroke the 3D form 1200.
  • the scaling provides a visual affordance that scales a 3D form 1200 in the manner desired.
  • the scale operation is remapped to all of the connected faces (i.e., the faces connected to the face that is being scaled) and all of the connected faces are scaled/updated based on the scaling operation.
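  • An illustrative reading of remapping the scale to the connected faces: scaling the gripped face about its centroid moves the vertices it shares with neighboring faces, so those faces update automatically. This is a sketch of that interpretation, not necessarily the patent's exact method:

```python
import numpy as np

def scale_face(vertices, face, factor):
    """Scale one face of a mesh about its centroid.

    vertices: (n, 3) array of mesh vertex positions (modified in place).
    face: tuple of vertex indices belonging to the scaled face.
    Because connected faces share these vertices, they are updated ("remapped")
    automatically when the shared vertices move.
    """
    centroid = vertices[list(face)].mean(axis=0)
    for i in face:
        vertices[i] = centroid + factor * (vertices[i] - centroid)
    return vertices

if __name__ == "__main__":
    verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                      [0, 0, 1], [1, 0, 1]], dtype=float)
    top_face = (0, 1, 2, 3)
    side_face = (0, 1, 5, 4)           # shares an edge with top_face
    scale_face(verts, top_face, 0.5)
    print(verts[0], verts[1])          # shared vertices moved; the side face follows
```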
  • FIGs. 14A-C illustrate the operation of bridging two faces of a 3D form in accordance with one or more embodiments of the invention.
  • To utilize the bridging operation and to create a bridge, the user must have the modeling tool 302 active. The user taps a given position 1402 on the 3D form 1400. The system highlights a section of the form 1400 under the original tap 1402. Such highlighting may display the section in a different color or otherwise in a manner visually distinguishable from the remainder of the form 1400. The user then taps again on another position 1404. Immediately after the system captures the second position, a connecting shape 1406 is generated from the inverted region and is appended to the 3D form 1400.
  • a bridging operation is dynamically performed. Further, such a bridging operation is performed using the same base 3D contriver tool that is used to perform all of the other operations described herein.
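  • A minimal sketch of the geometric core of a bridge: two equal-length vertex loops (the tapped faces) are joined by a band of quads. Loop alignment, winding, and removal of the original faces are left out; names are illustrative:

```python
def bridge_faces(face_a, face_b):
    """Return the quad faces that join two vertex loops of equal length.

    face_a, face_b: sequences of vertex indices (e.g. the faces tapped at
    positions 1402 and 1404).  The loops are assumed already aligned; a real
    implementation would also pick the rotation/winding that avoids twisting
    and would remove the two original faces so the bridge becomes a tunnel.
    """
    n = len(face_a)
    assert n == len(face_b), "bridged faces must have the same vertex count"
    return [(face_a[i], face_a[(i + 1) % n],
             face_b[(i + 1) % n], face_b[i]) for i in range(n)]

if __name__ == "__main__":
    print(bridge_faces((0, 1, 2, 3), (4, 5, 6, 7)))
```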
  • FIG. 15 illustrates the re-stroking of the bridge of FIG. 14C in accordance with one or more embodiments of the invention.
  • the user continues interacting with the modeling tool 302.
  • the user has also tumbled/orbited the 3D form 1406, thereby updating the plane to the XZ grid.
  • the user starts re-stroking from a position 1502 inside the first outer grid (i.e., in area 306) thereby invoking a re-stroking operation.
  • the system dynamically re-shapes the 3D form 1400.
  • the form re-shaping is interactive and updates the form 1400 every time the gesture 1504 is sampled. Further, the re-stroking modifies the 3D form relative to the current XZ grid.
  • FIGs. 16A and 16B illustrate a mirroring operation performed in accordance with one or more embodiments of the invention.
  • the user has activated the modeling tool 302 and the system displays an XY grid determined by the current viewing angle.
  • the user has also activated the symmetry tool 1600.
  • the user starts a brushing gesture from a position 1602A on the existing 3D form 1604.
  • the system dynamically adds to both sides of the original 3D form 1604.
  • a line/plane of symmetry is used to determine where/how the mirroring is performed.
  • Such a line/plane of symmetry may be determined based on a variety of methods (e.g., based on the base mesh, automatically determined by the system, drawn by the user, etc.).
  • the creation/editing of the 3D form 1604 may be driven by a predefined mirror plane or a plane that is perpendicular to the current modeling plane.
  • as the gesture begins (i.e., at area 1602A), the starting point 1602B of the corresponding/mirrored face is also displayed (e.g., in a visually distinguishable manner such as highlighting or different coloring).
  • the form shaping is interactive and updates the form every time the gesture 1606 is sampled.
  • the addition of shape to the 3D form 1604 is done relative to the current XY grid.
  • the system finishes shaping the 3D form 1604. The user can then either tumble/orbit or re-stroke the 3D form 1604.
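  • The symmetry behaviour can be sketched as reflecting every gesture sample across the mirror plane and applying the same brush operation on both sides; the plane parameters and names below are placeholders, not values from the patent:

```python
import numpy as np

def mirror_point(p, plane_point, plane_normal):
    """Reflect a 3D point across the plane given by a point and a normal."""
    p, plane_point = np.asarray(p, float), np.asarray(plane_point, float)
    n = np.asarray(plane_normal, float)
    n = n / np.linalg.norm(n)
    return p - 2.0 * np.dot(p - plane_point, n) * n

def mirrored_gesture(samples, plane_point=(0, 0, 0), plane_normal=(1, 0, 0)):
    """Return the gesture samples reflected across the symmetry plane, so the
    same brush operation can be applied to both sides of the form at once."""
    return [mirror_point(p, plane_point, plane_normal) for p in samples]

if __name__ == "__main__":
    stroke = [(1.0, 0.0, 0.0), (1.5, 0.5, 0.0)]
    for p in mirrored_gesture(stroke):
        print(p)          # x components flip sign: [-1. 0. 0.], [-1.5 0.5 0.]
```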
  • embodiments of the invention provide a 3D contriver tool/system for multi-touch devices that enables the creation, orbiting, and modification of 3D forms.
  • a unique set of gestures and transient graphic manipulators are provided that are closely related to a 2D brushing workflow and can build up complex forms with little to no 3D modeling expertise.
  • the tool is tightly integrated with 3D navigation and manages the coordinate system in a way that lets the user create geometry and edit geometry without changing modes or running specific tools/commands.
  • the form can be edited by re-stroking inside the modeling plane limits and in relation to the current active modeling plane, and the end shape can be scaled;
  • the stroking and selection model in the 3D contriver tool are aware of symmetry and can drive creation/editing over a pre-defined mirror plane or a plane that is perpendicular to the current modeling plane.
  • FIG. 17 illustrates the logical flow for using the 3D contriver tool in accordance with one or more embodiments of the invention.
  • the first action 1700 is that of activating the 3D contriver tool (i.e., the modeling tool 302). Once the tool 302 is activated, the multi-touch device detects either a two (2) finger drag 1702, a one (1) finger drag 1704, or a one (1) finger tap followed by a second finger tap 1706.
  • Zone B is also known as area 304, zone C is area 306, and zone D is area 308 of FIG. 3.
  • the different zones 304-308 are used to determine whether a creation, modification, or orbiting operation is performed.
  • If a 2-finger drag 1702 occurs in zone B 304, a determination is made regarding whether geometry exists under the touch event at 1708. If no geometry exists under the touch event, a two-finger modeling operation is performed at 1710 (e.g., as described above with respect to FIG. 10).
  • If a 1-finger drag operation 1704 is detected in zone C 306, the system again determines whether there is geometry under the touch event at 1726. If geometry exists under the touch event, a re-stroking operation 1728 is to be performed, and the system determines at 1730 whether an orbit/tumbling operation has been performed to change the plane from which the 3D geometry was created. In other words, determination 1730 determines whether the re-stroking is to be performed on the same/original plane on which the geometry was originally created. If a tumble/orbit operation has been performed (i.e., the plane on which the 3D geometry is displayed is different), the re-stroking is performed on the different plane at step 1732. If the plane has not been modified, the re-stroking is performed on the original plane at 1734.
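  • The decision flow of FIG. 17 can be condensed into a single dispatcher; the sketch below is illustrative only, and branches not spelled out in the text (for example a two-finger drag over existing geometry) are marked as assumptions:

```python
def dispatch(drag, zone, geometry_under_touch=False, plane_changed=False):
    """Condensed decision flow of FIG. 17 (illustrative; not every branch shown).

    drag: "two_finger_drag", "one_finger_drag", or "tap_tap"
    zone: "B" (area 304), "C" (area 306), or "D" (area 308)
    """
    if zone == "D":
        return "tumble_orbit"                    # region 308 always navigates
    if drag == "tap_tap":
        # two successive taps on the form create a bridge (FIGs. 14A-C)
        return "bridge" if geometry_under_touch else "ignore"
    if drag == "two_finger_drag" and zone == "B":
        if not geometry_under_touch:
            return "two_finger_modeling"         # step 1710
        return "two_finger_edit"                 # assumption: branch not spelled out in the text
    if drag == "one_finger_drag":
        if zone == "B":
            # brush-create on empty space, or brush new geometry off a face
            return "add_geometry" if geometry_under_touch else "create_geometry"
        if zone == "C":
            if not geometry_under_touch:
                return "tumble_orbit"            # no active form: navigate (cf. region 306)
            # re-stroke on the original plane or, after an orbit, on the new plane
            return "re_stroke_new_plane" if plane_changed else "re_stroke_original_plane"
    return "ignore"

if __name__ == "__main__":
    print(dispatch("one_finger_drag", "B"))                             # create_geometry
    print(dispatch("one_finger_drag", "C", geometry_under_touch=True))  # re_stroke_original_plane
    print(dispatch("two_finger_drag", "D"))                             # tumble_orbit
```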
  • embodiments of the invention provide a single tool that is displayed with different regions.
  • the single tool provides the ability for the user to perform a variety of operations simply by beginning a touch event (or cursor click event) within a particular region.
  • the tool may be used to navigate/tumble a 3D model, create a 3D geometric form (e.g., on a blank canvas or otherwise), and/or edit an existing 3D geometric form.
  • the operation selected/performed is based on where the touch event begins and not where the gesture associated with the touch event progresses. Nonetheless, once an operation is selected, the operation is based on the user's gesture.
  • A.M. Abbas. A Subdivision Surface Interpolating Arbitrarily-Intersecting Network of Curves under Minimal Constraints. cgi2010.miralab.unige.ch, 0:0-3.

Abstract

A modeling tool is activated in a 3D modeling application executing on a multi-touch device. A visual representation of a grid system tool is displayed in an active modeling plane and has three separate regions that determine the type of operation to be performed. An existing 3D form is displayed on the tool. A starting touch event of a gesture is received over the existing 3D form within one of the regions. As the gesture is received in the computer, the 3D form may be dynamically extended by adding 3D geometry to the 3D form (thereby adding faces to the 3D form). Alternatively, the 3D form may be scaled (i.e., if the starting touch event occurs over a visual scale grip). Alternatively, if the gesture consists of two taps, a bridge may be created joining the two tapped locations.

Description

GESTURES AND TOOLS FOR CREATING AND EDITING SOLID MODELS
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application is related to the following co-pending and commonly-assigned patent applications, which applications are incorporated by reference herein:
[0002] United States Patent Application Serial No. 13/085,195 filed on April 12, 2011, entitled "Transform Manipulator Control", by Gregory W. Fowler, Jason Bellenger, and Hans-Frederick Brown, Attorney Docket No. 30566.472-US-01;
[0003] United States Patent Application Serial No. XX/YYY,ZZZ filed on the same date herewith, entitled "THREE DIMENSIONAL CONTRIVER TOOL FOR MODELING WITH MULTI-TOUCH DEVICES", by Gregory W. Fowler, Jason Vincent Ma, and Hans-Frederick Brown, Attorney Docket No. 30566.485-US-01; and
[0004] United States Patent Application Serial No. XX/YYY,ZZZ filed on the same date herewith, entitled "DYNAMIC CREATION AND MODELING OF SOLID MODELS", by Gregory W. Fowler, Jason Vincent Ma, and Hans-Frederick Brown, Attorney Docket No. 30566.486-US-01.
BACKGROUND OF THE INVENTION
1. Field of the Invention.
[0005] The present invention relates generally to three-dimensional (3D) modeling, and in particular, to a method, apparatus, and article of manufacture for dynamically creating and modeling/editing a solid model (e.g., on a multi-touch device).
2. Description of the Related Art.
[0006] (Note: This application references a number of different publications as indicated throughout the specification by reference numbers enclosed in brackets, e.g., [x]. A list of these different publications ordered according to these reference numbers can be found below in the section entitled "References." Each of these publications is incorporated by reference herein.)
[0007] Many 3D modeling and drawing applications are used in both desktop and multi-touch devices. However, none of the existing multi-touch 3D modeling or drawing applications provides a comprehensive 3D modeling system that takes advantage of the multi-touch capabilities available across multiple mobile devices, without interfering with basic 3D navigation or requiring proficiency in the "art" of 3D modeling. Further, none of the prior art modeling systems provides the ability to dynamically create and dynamically modify/edit a 3D solid model (e.g., via user gestures). To better understand the problems and deficiencies of the prior art, a description of prior art modeling applications and activities may be useful.
[0008] Some prior art modeling applications (e.g., the Spaceclaim Engineering™ application), have explored multi-touch interactions in the context of 3D modeling tasks. However, such prior art applications mimic the interaction available via a mouse pointer. These interactions are not tailored for laymen to use without 3D modeling experience. Many of the operations also require two hand interactions that may not be adequate for smaller devices and require more muscle memory. Further, prior art applications involve multiple steps in which 3D solid models are not created dynamically in real-time consistent with the user's gestures.
[0009] In some cases, specific creation tools (e.g., extrude, revolve, offset, etc.) may have been implemented for multi-touch use. However, such creation tools are all static modality tools or commands that require proper selection. The tools are detached from the 3D navigation experience and do not fully take advantage of the multi-touch input devices.
[0010] In view of the above, it may be understood that 3D modeling activities and tasks generally imply and require an understanding/mastering of concepts such as coordinate systems, tool operations, tool selection sequence, and validity of selections. Accordingly, what is needed is the capability to easily perform a variety of modeling operations (including creation, modification, and navigation) on a multi-touch input device in a dynamic manner without multiple steps or selection requirements.
SUMMARY OF THE INVENTION
[0011] Embodiments of the invention provide a 3D contriver tool that provides the ability for a user to dynamically (in real-time) create and reshape a 3D geometric form on a multi-touch device based on a finger input gesture. Further, embodiments of the invention introduce new multi-touch gestures and interactions that combine multiple concepts into a simple, predictable workflow that mimics how brushes are used on an empty canvas. By simply touching a designated space, the user can rapidly and dynamically create forms without having to worry about tool sequencing, profile selection, direction, etc.
[0012] Additionally, once a form is laid down in space, the user can continue to dynamically adjust the form/geometry without having to launch an edit tool or invoke a special mode. By simply re-stroking the form, the system detects a modification operation, automatically switches to that operational mode, and allows the user to dynamically reshape the form.
[0013] Furthermore, embodiments of the invention introduce a "soft" 3D navigation (tumbling) activation/deactivation method that does not require the usage of multi-finger gestures or special modes. This transient navigation consists of tracking multi-touch inputs outside a virtual modeling box/plane that provides a 3D modeling environment that flows naturally without enforcing mode/tool switching or difficult clutch gestures to learn.
[0014] Embodiments of the invention enable a tool to perform the above described functionality. Such a tool may also be "loaded" with more complex geometry to help assemble intricate forms. The tool can automatically recognize connection conditions and blends the "stroked" geometry onto an existing form in a natural manner. In this interaction, the user can also continue adjusting the shape and scaling its outline. Further, when complex geometries are used to assemble sophisticated forms, the original geometries can be pre-defined with a simple rigging system that the user can adjust during the creation interaction.
[0015] All of the above functionality is presented to the user as a single tool that is highly context sensitive and used in a dynamic manner to both create and modify a solid model. The tool exposes all of the above interactions without interfering with 3D navigational activities (e.g., pan, zoom, tumble).
BRIEF DESCRIPTION OF THE DRAWINGS
[0016] Referring now to the drawings in which like reference numbers represent corresponding parts throughout:
[0017] FIG. 1 is an exemplary hardware and software environment used to implement one or more embodiments of the invention;
[0018] FIG. 2 schematically illustrates a typical distributed computer system using a network to connect client computers to server computers in accordance with one or more embodiments of the invention;
[0019] FIG. 3 illustrates a visual representation for a grid system (on a multi-touch device) that controls which gestures are either captured as modeling operations or navigational operations, specifically tumbling/orbiting in accordance with one or more embodiments of the invention;
[0020] FIGs. 4A and 4B illustrate a tumbling/orbiting operation where a grid system adapts itself based on the viewing angle in accordance with one or more embodiments of the invention;
[0021] FIG. 5 illustrates an exemplary modeling operation that creates a 3D form based on a modeling operation performed on empty space from within a region in accordance with one or more embodiments of the invention;
[0022] FIG. 6A illustrates a "polygonal complex" in accordance with one or more embodiments of the invention;
[0023] FIG. 6B illustrates both sides of a "polygonal complex" in accordance with one or more embodiments of the invention;
[0024] FIG. 6C illustrates using a square as the profile of the polygonal complex base mesh in accordance with one or more embodiments of the invention;
[0025] FIGs. 7A-7D illustrate the generation of a base mesh in accordance with one or more embodiments of the invention;
[0026] FIGs. 8A-8D illustrate a subdivision surface that is created dynamically consistent with a base mesh in accordance with one or more embodiments of the invention;
[0027] FIG. 9A illustrates an exemplary continued user interaction with the modeling tool and 3D form of FIG. 5 in accordance with one or more embodiments of the invention;
[0028] FIGs. 9B-9E illustrate dynamic re-stroking of the subdivision surface in accordance with one or more embodiments of the invention;
[0029] FIG. 9F illustrates a user's continued interaction with the modeling tool and the model of FIG. 9A in accordance with one or more embodiments of the invention;
[0030] FIG. 9G illustrates an example of a user continuing to interact using a modeling tool with the model form of FIG. 9F in a different plane in accordance with one or more embodiments of the invention;
[0031] FIG. 10 illustrates an example of a user creating a 3D solid form using two fingers simultaneously in accordance with one or more embodiments of the invention;
[0032] FIGs. 11A and 11B illustrate a user brush modeling from a face to add geometry to the face in accordance with one or more embodiments of the invention;
[0033] FIGs. 12A-12C illustrate the re-stroking of the model of FIG. 11B in the original plane, the tumbling/orbiting operation, and the re-stroking in the new different plane, respectively, in accordance with one or more embodiments of the invention;
[0034] FIG. 13 illustrates the user continuing to interact with the modeling tool 302 and the geometry of FIG. 12C in accordance with one or more embodiments of the invention;
[0035] FIGs. 14A-C illustrate the operation of bridging two faces of a 3D form in accordance with one or more embodiments of the invention;
[0036] FIG. 15 illustrates the re-stroking of the bridge of FIG. 14C in accordance with one or more embodiments of the invention;
[0037] FIGs. 16A and 16B illustrate a mirroring operation performed in accordance with one or more embodiments of the invention; and
[0038] FIG. 17 illustrates the logical flow for using the 3D contriver tool in accordance with one or more embodiments of the invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
[0039] In the following description, reference is made to the accompanying drawings which form a part hereof, and in which is shown, by way of illustration, several embodiments of the present invention. It is understood that other
embodiments may be utilized and structural changes may be made without departing from the scope of the present invention.
Overview
[0040] Embodiments of the invention provide a multi-touch 3D modeling system that is based on the idea of using life-like drawing tools on a blank canvas. A single tool provides the ability to automatically control creating, positioning, editing, scaling, and posing based on the view direction and multi-touch events. All of these operations are provided within the same context and without exiting the tool for 3D navigation operations.
[0041] Accordingly, a user is provided with access to a number of modeling interactions (e.g., creating/editing) that dynamically create base geometry that can be refined and later dynamically sculpted using 3D modeling tools. With the single tool (referred to herein as a 3D contriver tool), the user can explore new 3D creations without requiring special commands or modes. Such an approach maintains the artistic flow that users appreciate from prior art brushing and stroking systems.
Hardware Environment
[0042] FIG. 1 is an exemplary hardware and software environment 100 used to implement one or more embodiments of the invention. The hardware and software environment includes a computer 102 and may include peripherals. Computer 102 may be a user/client computer, server computer, or may be a database computer. The computer 102 comprises a general purpose hardware processor 104A and/or a special purpose hardware processor 104B (hereinafter alternatively collectively referred to as processor 104) and a memory 106, such as random access memory (RAM). The computer 102 may be coupled to and/or integrated with other devices, including input/output (I/O) devices such as a keyboard 114, a cursor control device 116 (e.g., a mouse, a pointing device, pen and tablet, touch screen, multi-touch device, etc.) and a printer 128. In one or more embodiments, computer 102 may be coupled to, or may comprise, a portable or media viewing/listening device 132 (e.g., an MP3 player, iPod™, Nook™, portable digital video player, cellular device, personal digital assistant, etc.).
[0043] In one embodiment, the computer 102 operates by the general purpose processor 104A performing instructions defined by the computer program 110 under control of an operating system 108. The computer program 110 and/or the operating system 108 may be stored in the memory 106 and may interface with the user and/or other devices to accept input and commands and, based on such input and commands and the instructions defined by the computer program 110 and operating system 108, to provide output and results.
[0044] Output/results may be presented on the display 122 or provided to another device for presentation or further processing or action. In one embodiment, the display 122 comprises a liquid crystal display (LCD) having a plurality of separately addressable liquid crystals. Alternatively, the display 122 may comprise a light emitting diode (LED) display having clusters of red, green and blue diodes driven together to form full-color pixels. Each liquid crystal or pixel of the display 122 changes to an opaque or translucent state to form a part of the image on the display in response to the data or information generated by the processor 104 from the application of the instructions of the computer program 110 and/or operating system 108 to the input and commands. The image may be provided through a graphical user interface (GUI) module 118A. Although the GUI module 118A is depicted as a separate module, the instructions performing the GUI functions can be resident or distributed in the operating system 108, the computer program 110, or implemented with special purpose memory and processors.
[0045] In one or more embodiments, the display 122 is integrated with/into the computer 102 and comprises a multi-touch device having a touch sensing surface (e.g., track pod or touch screen) with the ability to recognize the presence of two or more points of contact with the surface. Examples of multi-touch devices include mobile devices (e.g., iPhone™, Nexus S™, Droid™ devices, etc.), tablet computers (e.g., iPad™, HP Touchpad™), portable/handheld game/music/video player/console devices (e.g., iPod Touch™, MP3 players, Nintendo 3DS™, PlayStation Portable™, etc.), touch tables, and walls (e.g., where an image is projected through acrylic and/or glass, and the image is then backlit with LEDs).
[0046] Some or all of the operations performed by the computer 102 according to the computer program 110 instructions may be implemented in a special purpose processor 104B. In this embodiment, the some or all of the computer program 110 instructions may be implemented via firmware instructions stored in a read only memory (ROM), a programmable read only memory (PROM) or flash memory within the special purpose processor 104B or in memory 106. The special purpose processor
104B may also be hardwired through circuit design to perform some or all of the operations to implement the present invention. Further, the special purpose processor
104B may be a hybrid processor, which includes dedicated circuitry for performing a subset of functions, and other circuits for performing more general functions such as responding to computer program instructions. In one embodiment, the special purpose processor is an application specific integrated circuit (ASIC).
[0047] The computer 102 may also implement a compiler 112 which allows an application program 110 written in a programming language such as COBOL, Pascal,
C++, FORTRAN, or other language to be translated into processor 104 readable code.
Alternatively, the compiler 112 may be an interpreter that executes
instructions/source code directly, translates source code into an intermediate representation that is executed, or executes stored precompiled code. Such source code may be written in a variety of programming languages such as Java™,
Perl™, Basic™, etc. After completion, the application or computer program 110 accesses and manipulates data accepted from I/O devices and stored in the memory 106 of the computer 102 using the relationships and logic that was generated using the compiler 112.
[0048] The computer 102 also optionally comprises an external communication device such as a modem, satellite link, Ethernet card, or other device for accepting input from and providing output to other computers 102.
[0049] In one embodiment, instructions implementing the operating system 108, the computer program 110, and the compiler 112 are tangibly embodied in a non-transient computer-readable medium, e.g., data storage device 120, which could include one or more fixed or removable data storage devices, such as a zip drive, floppy disc drive 124, hard drive, CD-ROM drive, tape drive, etc. Further, the operating system 108 and the computer program 110 are comprised of computer program instructions which, when accessed, read and executed by the computer 102, cause the computer 102 to perform the steps necessary to implement and/or use the present invention or to load the program of instructions into a memory, thus creating a special purpose data structure causing the computer to operate as a specially programmed computer executing the method steps described herein. Computer program 110 and/or operating instructions may also be tangibly embodied in memory 106 and/or data communications devices 130, thereby making a computer program product or article of manufacture according to the invention. As such, the terms "article of
manufacture," "program storage device" and "computer program product" as used herein are intended to encompass a computer program accessible from any computer readable device or media.
[0050] Of course, those skilled in the art will recognize that any combination of the above components, or any number of different components, peripherals, and other devices, may be used with the computer 102.
[0051] FIG. 2 schematically illustrates a typical distributed computer system 200 using a network 202 to connect client computers 102 to server computers 206. A typical combination of resources may include a network 202 comprising the Internet, LANs (local area networks), WANs (wide area networks), SNA (systems network architecture) networks, or the like, clients 102 that are personal computers or workstations, and servers 206 that are personal computers, workstations,
minicomputers, or mainframes (as set forth in FIG. 1).
[0052] A network 202 such as the Internet connects clients 102 to server computers 206. Network 202 may utilize Ethernet, coaxial cable, wireless communications, radio frequency (RF), etc. to connect and provide the communication between clients 102 and servers 206. Clients 102 may execute a client application or web browser and communicate with server computers 206 executing web servers 210. Such a web browser is typically a program such as MICROSOFT INTERNET EXPLORER™, MOZILLA FIREFOX™, OPERA™, APPLE SAFARI™, etc. Further, the software executing on clients 102 may be downloaded from server computer 206 to client computers 102 and installed as a plug-in or ACTIVEX™ control of a web browser. Accordingly, clients 102 may utilize ACTIVEX™ components/component object model (COM) or distributed COM (DCOM) components to provide a user interface on a display of client 102. The web server 210 is typically a program such as MICROSOFT'S INTERNET INFORMATION SERVER™.
[0053] Web server 210 may host an Active Server Page (ASP) or Internet Server Application Programming Interface (ISAPI) application 212, which may be executing scripts. The scripts invoke objects that execute business logic (referred to as business objects). The business objects then manipulate data in database 216 through a database management system (DBMS) 214. Alternatively, database 216 may be part of, or connected directly to, client 102 instead of communicating/obtaining the information from database 216 across network 202. When a developer encapsulates the business functionality into objects, the system may be referred to as a component object model (COM) system. Accordingly, the scripts executing on web server 210 (and/or application 212) invoke COM objects that implement the business logic. Further, server 206 may utilize MICROSOFT'S™ Transaction Server (MTS) to access required data stored in database 216 via an interface such as ADO (Active Data Objects), OLE DB (Object Linking and Embedding DataBase), or ODBC (Open DataBase Connectivity).
[0054] Generally, these components 200-216 all comprise logic and/or data that is embodied in/or retrievable from device, medium, signal, or carrier, e.g., a data storage device, a data communications device, a remote computer or device coupled to the computer via a network or via another data communications device, etc. Moreover, this logic and/or data, when read, executed, and/or interpreted, results in the steps necessary to implement and/or use the present invention being performed.
[0055] Although the term "user computer", "client computer", and/or "server computer" is referred to herein, it is understood that such computers 102 and 206 may include thin client devices with limited or full processing capabilities, portable devices such as cell phones, notebook computers, pocket computers, multi-touch devices, and/or any other devices with suitable processing, communication, and input/output capability.
[0056] Of course, those skilled in the art will recognize that any combination of the above components, or any number of different components, peripherals, and other devices, may be used with computers 102 and 206.
Software Embodiment Overview
[0057] Embodiments of the invention are implemented as a software application/3D contriver tool on a client 102 or server computer 206. Further, as described above, the client 102 or server computer 206 may comprise a thin client device or a portable device that has a multi-touch-based display.
[0058] The 3D contriver tool for multi-touch devices can be grouped in different clusters of functionality (that are described in the present application and/or the related applications identified and cross referenced above) as follows:
1. Modeling Space and Tool Creator
i) Modeling Box & Dominant Plane
ii) Tumbling/Orbiting and Navigation
iii) Empty Space Brush Modeling
iv) Re-Stroking in Original Plane Interactions
v) Re-Stroking in Different Plane Interactions
vi) Two Finger Modeling
2. Modeling from a Face
i) From Face Brush Modeling
ii) Re-Stroking in Original Plane Interactions
iii) Re-Stroking in Different Plane Interactions
iv) Scaling
v) Two Face Bridging
vi) Bridging Re-Stroking
vii) Mirroring
Modeling Space and Tool Creator
[0059] Embodiments of the invention (e.g., system) will provide a single tool that permits multiple dynamic modeling operations and navigation within the same context without requiring complex gestures or modes.
Modeling Box and Dominant Plane
[0060] FIG. 3 illustrates a visual representation for a grid system (on a multi-touch device) that controls which gestures are either captured as modeling operations or navigational operations, specifically tumbling/orbiting. In FIG. 3, the user has activated the modeling tool 302. Once the modeling tool 302 has been selected, the system displays a grid 300 composed of three specific regions.
[0061] The center region 304 represents an area that will generate geometry if a touch event is detected (see below for more detail regarding modeling operations).
[0062] The first outer region 306 represents an area that will either trigger a re-stroking operation (re-brushing the geometry after the initial creation) when a form is active or trigger a tumbling/orbit navigation if no form is active (see below for more details on re-stroking modeling operations).
[0063] The second outer region/fall-off grid 308 represents an area that will always trigger a tumbling/orbit navigation if a touch event is detected. Any other touch event detected outside of the fall-off grid 308 will also trigger tumbling/orbit navigation.
[0064] Embodiments of the invention perform the desired operation based on where the touch event commences and not where the gesture following the touch event proceeds. Thus, merely by commencing a touch event/gesture at a particular location with respect to the grid 300, a particular operation is performed. The FIGs. and description that follow illustrate examples of the different operations that may be performed based on the grid 300.
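By way of a non-limiting illustration, the region-based dispatch described above may be sketched in Python as follows. The sketch assumes square, concentric regions with hypothetical bounds and names; it is intended only to show that the operation is chosen from where the touch event commences, regardless of where the gesture later proceeds.

def classify_region(point, center_half, outer_half):
    """Return 'center' (region 304), 'first_outer' (region 306), or 'outside' (region 308 and beyond)."""
    x, y = point
    if abs(x) <= center_half and abs(y) <= center_half:
        return "center"
    if abs(x) <= outer_half and abs(y) <= outer_half:
        return "first_outer"
    return "outside"

def operation_for_touch(start_point, form_is_active, center_half=1.0, outer_half=2.0):
    # Only the starting point of the gesture matters, not where it later proceeds.
    region = classify_region(start_point, center_half, outer_half)
    if region == "center":
        return "create"
    if region == "first_outer":
        return "re-stroke" if form_is_active else "tumble/orbit"
    return "tumble/orbit"

print(operation_for_touch((0.5, 0.2), form_is_active=False))  # -> create
print(operation_for_touch((1.5, 0.0), form_is_active=True))   # -> re-stroke
print(operation_for_touch((4.0, 4.0), form_is_active=True))   # -> tumble/orbit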
Tumbling/Orbiting and Navigation
[0065] FIGs. 4A and 4B illustrate a tumbling/orbiting operation based on a touch event occurring in region 308 of the grid 300. FIGs. 4A and 4B further illustrate how the grid system adapts itself based on the viewing angle. The general idea is to always display the optimal grid representation, derived from the current viewing angle. Accordingly, depending on how the user is viewing the grid 300, the grid may be flipped to place the user at an optimal viewing position/angle. The current viewing angle, and hence the dominant plane, may be determined as described in co-pending patent application 13/085,195, which is incorporated by reference herein.
[0066] To respond to the user tumbling/orbiting in space, embodiments of the invention dynamically switch to one of the dominant planes: XY, XZ, YZ and update the graphical representation of the grid accordingly.
[0067] FIG. 4A shows the final state of the grid system while the modeling tool 302 is active. The captured gesture 402A occurs entirely outside of the grid 300 (i.e., in region 308), thus invoking a tumble/orbit. In other words, the captured gesture 402A begins and ends in area 308 and not within region 304 and/or 306. The resulting viewing angle determines that the XZ plane is dominant, and all modeling operations in region 304 or any re-stroking operations in regions 304-306 will be projected to the XZ plane.
[0068] FIG. 4B shows the final state of the grid system while the modeling tool 302 is active. The captured gesture 402B begins outside of the grid 300 (i.e., in region 308) thus invoking a tumble/orbit. In other words, the captured gesture begins outside of the grid system in region 308 but proceeds into region 306. The resulting viewing angle determines that the YZ plane is dominant and all modeling operations in region 304 or any re-stroking operation in regions 304-306 will be projected to the YZ plane.
[0069] In view of FIGs. 4A and 4B, it may be noted that embodiments of the invention evaluate the action that is to be executed based on where the touch event begins/commences rather than where the touch event proceeds or ends. Accordingly, in FIGs. 4A and 4B, since the touch event/gesture 402 begins in area 308, a navigation (e.g., tumbling/orbiting) operation is performed.
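As a further non-limiting illustration, the selection of a dominant plane from the viewing angle may be sketched as follows. The sketch assumes that the dominant plane is the one whose normal is most closely aligned with the current view direction; the actual determination is described in the co-pending application referenced above.

def dominant_plane(view_dir):
    """view_dir: (x, y, z) camera view vector; returns 'XY', 'XZ' or 'YZ'."""
    x, y, z = (abs(c) for c in view_dir)
    # The normal of the XY plane is Z, of the XZ plane is Y, and of the YZ plane is X.
    if z >= x and z >= y:
        return "XY"
    if y >= x:
        return "XZ"
    return "YZ"

# Example: looking mostly down the Y axis makes the XZ plane dominant,
# so modeling and re-stroking gestures would be projected onto XZ.
print(dominant_plane((0.1, -0.9, 0.2)))   # -> XZ
print(dominant_plane((0.0, 0.0, -1.0)))   # -> XY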
Empty Space Brush Modeling Interface
[0070] FIG. 5 illustrates an exemplary modeling operation that dynamically creates a 3D form based on a modeling operation performed on empty space from within region 304. In FIG. 5, the user has activated the modeling tool 302, and the system displays an XY grid, determined by the current viewing angle.
[0071] The user starts brushing from a position inside the modeling grid 304. As the user drags a finger along any path 502, the system dynamically creates a 3D form 504. The form shaping is interactive and updates the form 504 every time it samples the gesture 502. Such form shaping is performed dynamically in real-time as the user performs the stroke/gesture. Accordingly, as the user is moving a finger, the model is changing. In the prior art, the ability to dynamically create a model in such a manner was not provided. Instead, prior art users were required to draw a curve, select the drawn curve, and select a profile. Thereafter, the user would perform a sweep/jigging operation/process. Such a sweep operation is not dynamically created as the user inputs a gesture but instead is based on an already drawn curve that is selected by the user.
[0072] Once the user finishes brushing (i.e., at 506), describing the path 502, the system finishes the shaping of the 3D form 504. The user can then tumble/orbit (e.g., as described above with respect to FIGs. 4A and 4B) or re-stroke the 3D form 504 (e.g., as described below).
[0073] Thus, since the gesture 502 commenced in region 304, a creation/modeling operation is performed based on the user's gesture 502. As illustrated, a 3D form 504 is dynamically created and conforms to the shape of the gesture/stroke 502. To perform the modeling creation operation, the user did not need to select a creation operation (e.g., from a menu or otherwise). Instead, the user simply began the gesture 502 within region 304. In response, a 3D form 504 is displayed on the grid 300 and is dynamically updated to conform to the stroke 502 while the stroke 502 is drawn.
Again, since the gesture/stroke 502 began inside of region 304, it doesn't matter if the stroke 502 proceeds outside of region 304. Instead, what enables the modeling operation is where the stroke 502 commences.
[0074] Accordingly, the grid system 300 of the invention enables the user to perform a desired operation merely by beginning a gesture in a particular area/region of the grid system 300.
Empty Space Brush Modeling Implementation Details
[0075] To create the 3D form in real time, a pair of curves may be created and used to produce a generalized tube surface that interpolates both curves simultaneously.
Embodiments of the invention may utilize a Catmull-Clark subdivision surface as the tube surface. With such a subdivision surface, the problem reduces to building an appropriate base mesh whose limit surface shall be the tube surface that interpolates the given curves.
[0076] Embodiments of the invention assume that such curves are already in a form ready to use and that the curves have compatible lengths, orientation, and structure. Further, both curves may be required to be uniform cubic b-splines that have the same number of spans. Having such a requirement may be necessary because the CV (control vertex) hulls of the curves may be used to guide the building of the base mesh.
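By way of a non-limiting illustration only, one simple way to give two stroked polylines compatible structure is to resample both by arc length to the same number of points and treat those points as CV hulls with the same number of spans. The following sketch is an editorial simplification of the requirement described above and is not taken from the disclosed implementation.

import math

def resample_by_arc_length(points, n_cvs):
    """Resample a polyline (list of (x, y, z) points) to n_cvs points evenly spaced by arc length."""
    dists = [0.0]
    for a, b in zip(points, points[1:]):
        dists.append(dists[-1] + math.dist(a, b))
    total = dists[-1]
    out, j = [], 0
    for i in range(n_cvs):
        target = total * i / (n_cvs - 1)
        while j < len(dists) - 2 and dists[j + 1] < target:
            j += 1
        span = dists[j + 1] - dists[j] or 1.0
        t = (target - dists[j]) / span
        out.append(tuple(p + t * (q - p) for p, q in zip(points[j], points[j + 1])))
    return out

def compatible_hulls(curve_a, curve_b, n_cvs=8):
    return resample_by_arc_length(curve_a, n_cvs), resample_by_arc_length(curve_b, n_cvs)

# Example: two stroked polylines resampled to hulls with the same structure.
top = [(0, 0, 0), (1, 0, 2), (3, 0, 2.5), (5, 0, 2)]
bottom = [(0, 0, 0), (2, 0, -1), (5, 0, -0.5)]
hull_a, hull_b = compatible_hulls(top, bottom, n_cvs=6)
print(len(hull_a) == len(hull_b) == 6)   # -> True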
[0077] Many works on curve-interpolating subdivision surfaces can be found in academia. Generally, they fall into the following categories: those that build the base mesh specifically for the curve to be interpolated ([1], [8], [5], [6]), those that tag edges of an existing base mesh and have enhanced rules of subdivision ([2]), or those that provide entirely new schemes ([3]).
[0078] In view of the above, the first step is that of building the base mesh that encompasses two polygonal complexes so that the final subdivision surface will interpolate both curves at once (e.g., see [5] and [6] for details regarding polygonal complexes). To configure the polygonal complex for one curve, embodiments of the invention build a section of a "polygonal complex" that interpolates a CV c0 of the curve C0 as:
c0 = (t0 + 4 m0 + b0) / 6 (1)
[0079] In other words, one must determine where to put the points t0 and b0 in order to place m0. FIG. 6A illustrates a "polygonal complex" in accordance with one or more embodiments of the invention. FIG. 6A can be constructed such that:
(m0 - c1) · (c0 - c1) = ||m0 - c1|| ||c0 - c1|| (2)
i.e., m0 lies on the ray from c1 through c0.
[0080] Let
o = ||t0 - p0|| = ||b0 - p0|| (3)
where p0 is the common projection of t0 and b0 onto the axis through c0 and c1, a = ||c0 - p0||, and θ is the angle at m0 between that axis and the edge m0-t0 (see FIG. 6A). An expression for o may then be found in terms of a:
o = (||m0 - c0|| + a) tan θ (4)
[0081] Accordingly, ||m0 - c0|| needs to be expressed in terms of a. Utilizing equation (1) above:
m0 - c0 = (1/4)[(c0 - t0) + (c0 - b0)] (5)
[0082] By construction, the vector m0 - c0 is parallel to c1 - c0. Also, the restriction that t0 and b0 are symmetric across the axis implies that the projections of c0 - t0 and c0 - b0 onto the axis are equivalent (to c0 - p0). Therefore, to proceed onwards from (5), one only needs to consider the aforementioned projections:
||m0 - c0|| = (1/4)(||c0 - p0|| + ||c0 - p0||) = (1/2) a (6)
[0083] Thus, putting (6) and (4) together provides the following desired result:
o = (3/2) a tan θ (7)
[0084] Making the profile a regular hexagon is the next step. Having figured out the value of o in terms of the single variable a, one may proceed to express the value of a in terms of the only known values c0 and c1. FIG. 6B illustrates both sides of the "polygonal complex" in accordance with one or more embodiments of the invention. From FIG. 6B (not drawn to scale), it is easy to see that:
||c0 - c1|| = a + ||t0 - t1|| + a (8)
[0085] If the base mesh profile is to be a regular hexagon, the sides must have the same length. In particular,
||t0 - t1|| = ||t0 - m0|| (9)
[0086] Referring to FIG. 6A, t0-m0 is the hypotenuse of the right triangle m0-t0-p0, which means:
||t0 - m0|| = o / sin θ (10)
[0087] Therefore, if you account for (7) and apply some basic trigonometry:
||t0 - m0|| = (3 a tan θ) / (2 sin θ) = 3a / (2 cos θ) (11)
[0088] If you substitute equation (11) into equation (8) and apply the knowledge that in a regular hexagon, θ = 60° => cos θ = 1/2, the equation for a can be had:
||c0 - c1|| = (3 / (2 cos θ) + 2) a (12)
a = ||c0 - c1|| / 5 (13)
[0089] Thus, both o and a are now expressed purely by the distance between c0 and c1, which are the only inputs with which processing began. It is now a trivial exercise to construct the points t0, b0, and m0 for the CV c0, and similarly for c1 (e.g., see FIG. 6B).
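A short non-limiting sketch of the above construction follows: given c0, c1, and an assumed unit direction n perpendicular to the axis (used to orient the profile plane), it computes a from equation (13), o from equation (7), and the six profile points, and then verifies that the complex reproduces c0 per equation (1). The function names and data layout are illustrative only.

import math

def hexagon_profile(c0, c1, n):
    """c0, c1: 3D points; n: unit vector perpendicular to c1 - c0."""
    d = [q - p for p, q in zip(c0, c1)]
    length = math.sqrt(sum(x * x for x in d))
    u = [x / length for x in d]                  # unit axis direction from c0 to c1
    a = length / 5.0                             # equation (13)
    o = 1.5 * a * math.tan(math.radians(60))     # equation (7), regular hexagon

    def along(base, direction, dist):
        return [p + dist * x for p, x in zip(base, direction)]

    p0, p1 = along(c0, u, a), along(c1, u, -a)   # projections of t/b onto the axis
    m0, m1 = along(c0, u, -a / 2), along(c1, u, a / 2)
    t0, b0 = along(p0, n, o), along(p0, n, -o)
    t1, b1 = along(p1, n, o), along(p1, n, -o)
    return m0, t0, t1, m1, b1, b0

# Sanity check of equation (1): the complex (t0 + 4*m0 + b0)/6 reproduces c0.
m0, t0, t1, m1, b1, b0 = hexagon_profile((0, 0, 0), (5, 0, 0), (0, 0, 1))
print([(t + 4 * m + b) / 6 for t, m, b in zip(t0, m0, b0)])   # -> [0.0, 0.0, 0.0]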
[0090] It may be noted that the hexagonal profile need not be regular. Using equations (7) and (12), one can vary o and a, within limits, to obtain variations of the shape of the tube. The limits on the length a are 0 < a ≤ ||c0 - c1|| / 2; at the upper limit of this range, the profile turns into a 4-sided profile, as described below. Furthermore, the polygonal complexes for both sides need not be symmetric: the only restriction is that
||c0 - p0|| + ||p1 - c1|| < ||c0 - c1||. Then, any reasonable values for o0 and o1 can be chosen to create a lopsided 6-sided profile.
Also, it is trivial to extend this construction method to (regular) 2N-sided polygons, N > 3.
[0091] One can also imagine using a square as the profile of the polygonal complex base mesh, as illustrated in FIG. 6C. In this case, t0 and t1 are now the same point (and likewise for the b's and p's); in FIG. 6C they are labeled t, b, and p respectively. Now, because θ = 45° => tan θ = 1, equation (7) simplifies to:
o = (3/2) a (14)
[0092] By inspection, p is now the midpoint between c0 and c1, so:
a = ||c0 - c1|| / 2 (15)
[0093] Again, just like the case of the hexagonal profile, the 4-sided profile need not be a square. The point p can be located anywhere between c0 and c1, and the offsets o_t and o_b (from p to t and to b) can even be different. However, this construction method can only yield a rhombic profile, and is not able to create a rectangular profile. A rectangular profile (where the angles at m0, m1, t, and b of the profile are 90°) can be created - but the derivation may have many variations depending on the constraints at hand.
[0094] To ensure end-point interpolation, various additional processing may be required. If both curves were closed, then simply creating the base mesh via polygonal complexes is sufficient to create a subdivision surface that interpolates the two closed curves. However, if the curves are open, then extra work needs to be done to the polygonal complexes so that the subdivision surface will interpolate the open ends of the curves.
[0095] Some literature on the subject of curve-interpolating subdivision surfaces glosses over the case of the curve being open, instead choosing to focus on interpolating either closed boundary curves or networks of curves (where the open ends of all curves are coincident with at least one other curve) (e.g., see [4], which describes subdivision surfaces interpolating open curves but is limited to Doo-Sabin [quadratic] surfaces).
[0096] In one or more embodiments of the invention, the CV hull of a cubic b-spline can be used as the base polygon for a Catmull-Clark subdivision curve.
However, if the b-spline has full-multiplicity knots for the end CVs, the respective Catmull-Clark subdivision curve will ignore this and the resulting subdivision curve will not interpolate the end CVs as expected. Therefore, if one were to use the CVs of an open curve to generate the polygonal complex as described above, the resulting subdivision surface may also fail to interpolate the endpoints of the input curves.
[0097] Embodiments of the invention may apply the techniques described in [7] to modify the CV hull of the input open curves. Four techniques may be used to modify the end CVs of the cubic b-spline curve (e.g., see [7]). After experimentation, embodiments of the invention utilize a method that modifies the last two (2) CVs on either end of the hull to satisfy the so-called "Bezier end-constraint":
B' = (3B - C) / 2 (16)
A' = C + 6(A - B) (17)
where A, B, and C are the last three (3) original CVs of one end of the curve (A being the end CV), and A' and B' are the modified CVs. Subsequently, the modified hulls are used to generate the polygonal complexes and base mesh, as described above.
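As a non-limiting illustration, the end-CV modification may be sketched as follows. The sketch applies equations (16) and (17) at both ends of an open hull; the reconstruction of equation (16) from the surrounding derivation, as well as the function names, are editorial assumptions rather than the disclosed implementation.

def apply_bezier_end_constraint(hull):
    """hull: list of CV tuples for one open curve; returns a modified copy."""
    def fix_end(cvs):
        a, b, c = cvs[0], cvs[1], cvs[2]                                  # A, B, C at one end (A is the end CV)
        b_new = tuple((3 * bi - ci) / 2 for bi, ci in zip(b, c))          # equation (16)
        a_new = tuple(ci + 6 * (ai - bi) for ai, bi, ci in zip(a, b, c))  # equation (17)
        return [a_new, b_new] + cvs[2:]
    hull = fix_end(list(hull))                               # one end of the hull
    hull = list(reversed(fix_end(list(reversed(hull)))))     # the other end
    return hull

# The limit point (A' + 4B' + C)/6 of the modified hull reproduces the original end CV.
hull = [(0.0, 0.0), (1.0, 0.0), (2.0, 1.0), (3.0, 1.0), (4.0, 0.0)]
a2, b2, c2 = apply_bezier_end_constraint(hull)[:3]
print(tuple((x + 4 * y + z) / 6 for x, y, z in zip(a2, b2, c2)))   # -> (0.0, 0.0), i.e., hull[0]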
[0098] In view of the above, embodiments of the invention dynamically create the curves (as such curves are generated via user gesture) that are used to dynamically create a base mesh, which in turn is used to dynamically create a tube surface that interpolates the given curves. Thus, the first step in such a process is to build a base mesh from the curves. FIGs. 7A-7D illustrate the generation of a base mesh in accordance with one or more embodiments of the invention. In FIG. 7A, the user begins the gesture at point 702 from within grid region 304. Since the user is within grid region 304, a creation operation is performed. As the user drags his finger, the system dynamically creates the base mesh 704. The shape of the base mesh 704 conforms to, and is consistent with, the user's gesture 706. As illustrated in FIGs. 7B, 7C, and 7D, the base mesh is continuously and dynamically created in real-time as the gesture 706 is input by the user and sampled. As illustrated in FIGs. 7A-7D, the CV hulls of the curves (i.e., dynamically created based on the user's gesture 706 in real-time) are used to guide the building of the base mesh 704.
[0099] Once the base mesh is created, the limit surface of the base mesh is used as the tube surface that interpolates the given curves. As used herein, the limit surface is referred to as the sub-division surface. FIGs. 8A-8D illustrate a subdivision surface that is created dynamically consistent with a base mesh. In FIGs. 8A-8B, the user begins the gesture 806 at point 802 and the system dynamically creates the
subdivision 3D surface 804 in real time (based on a base mesh). FIGs. 8C and 8D further illustrate the dynamic creation of the subdivision surface 804 in real-time as the gesture is input by the user.
[0100] In view of the above, it may be seen that the interpolation to create the subdivision surface is performed in real time. In this regard, the system utilizes the base mesh and as the gesture is input, the system determines the faces that should be added/interpolated based on the base mesh. Such interpolation and 3D subdivision surface creation is performed dynamically as the gesture is input as illustrated in FIGs. 8A-8D.
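As a further non-limiting illustration, once a six-point profile has been constructed for every pair of corresponding CVs, the base mesh may be assembled by joining consecutive profiles with quadrilateral faces. The following sketch assumes an open (uncapped) tube and a fixed ordering of the profile points; it is illustrative only.

def assemble_base_mesh(rings):
    """rings: list of profiles, each a list of six 3D points, in order along the tube."""
    vertices, faces = [], []
    for ring in rings:
        vertices.extend(ring)
    n = 6                                   # points per profile
    for i in range(len(rings) - 1):
        for j in range(n):
            a = i * n + j
            b = i * n + (j + 1) % n
            c = (i + 1) * n + (j + 1) % n
            d = (i + 1) * n + j
            faces.append((a, b, c, d))      # one quad between adjacent rings
    return vertices, faces

# Example: two rings give a single band of six quads.
ring0 = [(0, 0, 0), (1, 0, 1), (2, 0, 1), (3, 0, 0), (2, 0, -1), (1, 0, -1)]
ring1 = [(0, 2, 0), (1, 2, 1), (2, 2, 1), (3, 2, 0), (2, 2, -1), (1, 2, -1)]
print(len(assemble_base_mesh([ring0, ring1])[1]))   # -> 6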
Re-Stroking in Original Plane Interactions
[0101] FIG. 9A illustrates an exemplary continued user interaction with the modeling tool 302 and the 3D form 504 of FIG. 5.
[0102] The user starts re-stroking from a position 902A inside the first outer grid 306. As the user drags his/her finger along any path 904A, the system dynamically re-shapes the 3D form 504. The form 504 re-shaping is dynamic, interactive, and updates the form 504 every time it samples the gesture (i.e., dynamically in real-time).
[0103] In FIG. 9A, the re-stroking modifies the 3D form 504 in relationship to the current XY grid. Once the user finishes re-stroking (i.e., at 906A) thereby describing the path 904A, the system finishes re-shaping the 3D form 504 and the user can then either tumble/orbit or re-stroke the 3D form 504.
[0104] Accordingly, as described above, since the operation/gesture 904A is commenced at a location 902A within area/region 306, a re-stroking operation is performed (i.e., since form 504 is active). If the operation were conducted outside of region 306 (i.e., in region 308), an orbit/tumbling operation would be conducted.
[0105] FIGs. 9B-9E illustrate dynamic re-stroking of the subdivision surface in accordance with one or more embodiments of the invention. As illustrated, the re-stroking commences at point 902B within region 306 (while a geometry/form 504 is active). Accordingly, the system performs a re-stroking operation rather than a tumbling operation. The progression through FIGs. 9B-9E illustrates how the reshaping of form 504 occurs dynamically in real-time as the gesture 904B is input by the user. Again, similar to the initial creation of the form 504, the surface is re-interpolating the curves (created from the gesture 904B) based on the base mesh corresponding to such curves. Such interpolation and display of the modified form 504 is performed dynamically in real-time. In this regard, the user is not required to select a particular curve, profile, or otherwise before the geometric form 504 is modified.
[0106] In FIG. 9F, the user continues interacting with the modeling tool 302 and the model of FIG. 9A. The image shows the final state of the grid system 300 while the modeling tool 302 is active. The captured gesture 904F occurs/commences at point 902F outside of the grid system 300 (i.e., in region 308), thus invoking a tumble/orbit. The resulting viewing angle (based on the gesture 904F that stops at point 906F) determines that the YZ plane is dominant and all re-stroking operations will be projected to the YZ plane.
Re-Stroking in Different Plane Interactions
[0107] FIG. 9G illustrates an example of a user continuing to interact using a modeling tool 302 with the model form of FIG. 9F in a different plane. The user starts re-stroking from a position 902G inside the first outer grid 306. As the user drags a finger along any path 904G, the system dynamically re-shapes the 3D form 504. The form re-shaping is interactive, dynamic, and updates the form 504 every time the gesture 904G is sampled.
[0108] The re-stroking modifies the 3D form 504 in relationship to the current YZ grid. Once the user has finished the re-stroking gesture at point 906G describing the path 904G, the system finishes the re-shaping of the 3D form 504 and the user can then either tumble/orbit or re-stroke the 3D form 504.
[0109] Thus, as described above, since the stroke 904G begins at a point 902G within region 306, a reshaping operation is performed. Further, the operation is performed in the YZ plane due to the rotation/tumbling that was performed as described above with respect to FIG. 9F. In addition, similar to re-stroking on the same plane (described above), the re-stroking operation in a different plane is performed dynamically as the gesture is drawn by the user.
Two Finger Modeling
[0110] FIG. 10 illustrates an example of a user creating a 3D solid form using two fingers simultaneously. The user has activated the modeling tool 302, and in response, the system displays an XY grid determined by the current viewing angle.
[0111] The user starts brushing from two positions 1002A and 1002B inside the modeling grid 304. This can be done using two hands or two fingers on the same hand. As the user drags his/her fingers along any paths 1004A and 1004B, the system dynamically creates a 3D form 1006. The form shaping is interactive and updates the form every time it samples the gesture.
[0112] Once the user has finished brushing at points 1008A and 1008B, thereby describing the paths 1004A and 1004B, the system finishes shaping the 3D form 1006. The user can then either tumble/orbit or re-stroke the 3D form 1006.
[0113] Heuristics may also be in place to differentiate these creation-driven dual touch events from other similar multi-touch gestures like pinching. Pinching may be used in the prior art for zooming and panning.
Modeling from a Face
[0114] The modeling tool and interaction described above may also be used to add shapes to newly created 3D forms. In this regard, a tool may be used to modify and add 3D geometry to an already existing form (e.g., based on direction).
Brush Modeling from a Face
[0115] FIGs. 11A and 11B illustrate a user brush modeling from a face to add geometry to the face in accordance with one or more embodiments of the invention.
In response to the user activating the modeling tool 302, the system displays an XY grid determined by the current viewing angle.
[0116] The user starts brushing from a position 1102 on the existing 3D form 1100. As the user drags his/her finger along any path 1104 (FIG. 11B), the system dynamically adds to the existing 3D form 1100. The form shaping is interactive and updates the form 1100 every time the gesture 1104 is sampled. The additional shape 1106 added to the 3D form 1100 is relative to the current XY grid. In other words, based on the view direction and the dynamic stroking 1104 (from 1102 to 1108), an extrusion 1106 comes out of the face 1100. As the user moves his/her finger, the model 1100 is dynamically changing.
[0117] Once the user finishes brushing 1108 describing the path 1104, the system finishes shaping the 3D form 1100 (i.e., by adding geometry 1106). The user can then either tumble/orbit or re-stroke the 3D form 1100 (e.g., see above for details regarding re-stroke modeling operations).
[0118] In the prior art, to modify the geometry of form 1100, the user was required to first select/create/draw a profile and then select a process to create the geometry (e.g., jigging: drawing a curve, selecting a geometry, selecting a profile, and then sweeping the curve in a non-dynamic manner). In this regard, the prior art fails to provide a mechanism for dynamically creating and displaying a shape as an arbitrary gesture 1104 is created.
[0119] As described above with respect to the creation of the geometry, a base mesh may be used/created as the gesture 1104 is performed. As the user is extruding with the finger, the system is determining how many additional faces should be inserted.
In other words, new faces 1106 and a new base mesh are created by the user dynamically as an arbitrary gesture 1104 is drawn.
Re-Stroking a Model
[0120] FIGs. 12A-12C illustrate the re-stroking of the model of FIG. 11B in the original plane, the tumbling/orbiting operation, and the re-stroking in the new different plane, respectively.
[0121] In FIG. 12A, the user continues interaction with the modeling tool 302 of FIG. 11B. The user starts re-stroking from a position 1202 inside the modeling grid. As the user drags his/her finger along any path 1204, the system dynamically re-shapes the 3D form 1200. The form re-shaping is interactive and updates the form 1200 every time the gesture 1204 is sampled. The re-stroking modifies the 3D form 1200 relative to the current XY grid.
[0122] Once the user finishes re-stroking 1206 describing the path 1204, the system finishes the re-shaping of the 3D form 1200. The user can then either tumble/orbit or re-stroke the 3D form 1200. Thus, similar to that described above, since the user commences the operation from within area 306, a re-stroking operation is performed rather than an orbit/creation operation. The re-stroking operation is performed with respect to the existing geometry 1200 and serves to modify a face 1208 of the existing geometry 1200.
[0123] In FIG. 12B, the user begins the gesture at point 1210 outside of the grid area 306 and hence an orbit operation is performed thereby orbiting the geometry
1200 from the XY plane to the XZ plane. Similar to that described above, both the geometry 1200 and the underlying grid are orbited and displayed in the new orientation.
[0124] In FIG. 12C, the user continues interaction with the modeling tool 302 and has tumbled/orbited, thereby updating the plane to the XZ grid as described with respect to FIG. 12B. The user starts re-stroking from a position 1214 inside the first outer grid (i.e., area 306). As the user drags his/her finger along the path 1216, the system dynamically re-shapes the 3D form 1200. The form re-shaping is interactive and updates the form 1200 every time the gesture 1216 is sampled. The re-stroking modifies the 3D form 1200 in relationship to the current XZ grid.
[0125] Once the user finishes re-stroking at 1218, thereby describing the path 1216, the system finishes re-shaping the 3D form 1200. The user can then either tumble/orbit or re-stroke the 3D form 1200.
Scaling
[0126] FIG. 13 illustrates the user continuing to interact with the modeling tool 302 and the geometry of FIG. 12C. The user strokes downwards from a given visual scale grip 1302. As the user drags his/her finger downwards, the system dynamically scales the 3D form 1200. The form scaling is interactive and updates the form 1200 every time the gesture 1304 is sampled. Once the user finishes stroking downwards, thereby describing the path 1304, the system finishes scaling the 3D form 1200. The user can then either tumble/orbit or re-stroke the 3D form 1200. Accordingly, the scaling provides a visual affordance that scales a 3D form 1200 in the manner desired. The scale operation is remapped to all of the connected faces (i.e., the faces connected to the face that is being scaled) and all of the connected faces are scaled/updated based on the scaling operation.
Two-Face Bridging
[0127] FIGs. 14A-C illustrate the operation of bridging two faces of a 3D form in accordance with one or more embodiments of the invention.
[0128] To utilize the bridging operation and to create a bridge, the user must have the modeling tool 302 active. The user taps a given position 1402 on the 3D form 1400. The system highlights a section of the form 1400 under the original tap 1402. Such highlighting may display the section in a different color or otherwise render the section visually distinguishable from the remainder of the form 1400. The user then taps again on another position 1404. Immediately after the system captures the second position, a connecting shape 1406 is generated from the inverted region and is appended to the 3D form 1400.
[0129] Once the system captures the two tap gestures 1402 and 1404 describing the new appended shape 1406, the user can then either tumble/orbit or re-stroke the new 3D form 1400.
[0130] In view of the above, if a user performs a tap operation, the system will wait for another tap operation. If a second tap is detected, a bridging operation is dynamically performed. Further, such a bridging operation is performed using the same base 3D contriver tool that is used to perform all of the other operations described herein.
Bridge Re-Stroking
[0131] FIG. 15 illustrates the re-stroking of the bridge of FIG. 14C in accordance with one or more embodiments of the invention. In this example, the user continues interacting with the modeling tool 302. The user has also tumbled/orbited the 3D form 1406, thereby updating the plane to the XZ grid.
[0132] The user starts re-stroking from a position 1502 inside the first outer grid (i.e., in area 306) thereby invoking a re-stroking operation. As the user drags his/her finger along any path 1504, the system dynamically re-shapes the 3D form 1400. The form re-shaping is interactive and updates the form 1400 every time the gesture 1504 is sampled. Further, the re-stroking modifies the 3D form relative to the current XZ grid.
[0133] Once the user finishes re-stroking thereby describing the path 1504, the system finishes re-shaping the 3D form 1400. The user can then either tumble/orbit or re-stroke the 3D form 1400.
Mirroring
[0134] FIGs. 16A and 16B illustrate a mirroring operation performed in accordance with one or more embodiments of the invention. In this example, the user has activated the modeling tool 302 and the system displays an XY grid determined by the current viewing angle. In addition, the user has also activated the symmetry tool 1600.
[0135] The user starts a brushing gesture from a position 1602A on the existing 3D form 1604. As the user drags his/her finger along any path 1606, the system dynamically adds to both sides of the original 3D form 1604. A line/plane of symmetry is used to determine where/how the mirroring is performed. Such a line/plane of symmetry may be determined based on a variety of methods (e.g., based on the base mesh, automatically determined by the system, drawn by the user, etc.). In this regard, the creation/editing of the 3D form 1604 may be driven by a predefined mirror plane or a plane that is perpendicular to the current modeling plane.
[0136] Further, when the gesture begins (i.e., at area 1602A), the starting point 1602B of the corresponding/mirrored face is also displayed (e.g., in a visually distinguishable manner such as highlighting or different coloring). The form shaping is interactive and updates the form every time the gesture 1606 is sampled. The additional shape is added to the 3D form 1604 relative to the current XY grid. Once the user finishes brushing, thereby describing the path 1606, the system finishes shaping the 3D form 1604. The user can then either tumble/orbit or re-stroke the 3D form 1604.
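By way of a non-limiting illustration, the mirrored brushing may be sketched as reflecting each sampled brush point across the mirror plane and applying the stroke at both locations. The representation of the plane by a point and a unit normal, and the function names, are assumptions of the sketch.

def mirror_point(point, plane_point, plane_normal):
    """Reflect a 3D point across the plane defined by plane_point and unit plane_normal."""
    d = sum((p - q) * n for p, q, n in zip(point, plane_point, plane_normal))
    return tuple(p - 2 * d * n for p, n in zip(point, plane_normal))

def apply_brush_sample(point, plane_point, plane_normal, brush):
    brush(point)                                           # original stroke
    brush(mirror_point(point, plane_point, plane_normal))  # mirrored stroke

# Example: reflecting (2, 1, 0) across the YZ plane through the origin.
print(mirror_point((2, 1, 0), (0, 0, 0), (1, 0, 0)))       # -> (-2, 1, 0)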
Logical Flow
[0137] In view of the above, embodiments of the invention provide a 3D contriver tool/system for multi-touch devices that enables the creation, orbiting, and modification of 3D forms. In other words, a unique set of gestures and transient graphic manipulators are provided that are closely related to a 2D brushing workflow and can build up complex forms with little to no 3D modeling expertise. The tool is tightly integrated with 3D navigation and manages the coordinate system in a way that lets the user create geometry and edit geometry without changing modes or running specific tools/commands.
[0138] Unique aspects of the contriver tool include:
(1) an optimal modeling plane that is dependent on the optimal viewing angle;
(2) 3D modeling tools that are activated and deactivated using a modeling plane and tumbling operation, and are always available outside of the plane without having to change modes;
(3) when a gesture is detected in the modeling plane, the system generates a corresponding form based on the stroke direction and the active modeling plane;
(a) the user can utilize two fingers to describe two stroke paths and the system will create a corresponding form; and
(b) the form can be edited by re-stroking inside the modeling plane limits and in relation to the current active modeling plane;
(4) when a gesture is detected over an existing shape, the system extends the form, following the stroke direction and the active modeling plane;
(a) the user can utilize two fingers and select two parts of the shape and the system will unite them with a bridge-like geometry;
(b) the form can be edited by re-stroking inside the modeling plane limits and in relation to the current active modeling plane; and
(c) the end shape can be scaled; and
(5) the stroking and selection model in the 3D contriver tool are aware of symmetry and can drive creation/editing over a pre-defined mirror plane or a plane that is perpendicular to the current modeling plane.
[0139] FIG. 17 illustrates the logical flow for using the 3D contriver tool in accordance with one or more embodiments of the invention. The first action 1700 is that of activating the 3D contriver tool (i.e., the modeling tool 302). Once the tool 302 is activated, the multi-touch device detects either a two (2) finger drag 1702, a one (1) finger drag 1704, or a one (1) finger tap followed by a second finger tap 1706.
[0140] Regardless of whether a 2-finger drag 1702, 1-finger drag 1704, or taps 1706 are detected, the multi-touch device (via the contriver tool) determines the zone in which the touch event is detected. Zone B is also known as area 304, zone C is area 306, and zone D is area 308 of FIG. 3. The different zones 304-308 are used to determine whether a creation, modification, or orbiting operation is performed.
[0141] If a 2-finger drag 1702 occurs in zone B 304, a determination is made regarding whether geometry exists under the touch event at 1708. If no geometry exists under the touch event, a two-finger modeling operation is performed at 1710 (e.g., as described above with respect to FIG. 10).
[0142] If geometry is detected under the touch event at 1708, or if the touch event occurs in zone C 306 or zone D 308, a navigation pan or zoom operation is performed based on the gesture at 1712 (i.e., as described above).
[0143] If a 1-finger drag event 1704 is detected in zone B 304, a determination is made regarding whether there is geometry under the touch event at 1714. If no geometry is detected, a 1-finger modeling operation is performed at 1716 (e.g., as described above). However, if geometry is detected, the system determines that a modeling operation from the face 1718 is to be performed. A determination is then made at 1720 regarding whether the symmetry/modeling tool is also active. If the mirror/symmetry tool is not active, modeling from the face is performed without mirroring at 1722. Alternatively, if the mirror/symmetry tool is active, modeling from the face is performed with mirroring at 1724.
[0144] If a 1-finger drag operation 1704 is detected in zone C 306, the system again determines if there is geometry under the touch event at 1726. If geometry exists under the touch event, a re-stroking operation 1728 is to be performed and the system determines at 1730 whether an orbit/tumbling operation has been performed to change the plane from which the 3D geometry was created. In other words, determination 1730 determines whether the re-stroking is to be performed on the same/original plane on which the geometry was originally created. If a tumble/orbit operation has been performed (i.e., the plane on which the 3D geometry is displayed is different), the re-stroking is performed on the different plane at step 1732. If the plane has not been modified, the re-stroking is performed on the original plane at 1734.
[0145] Returning to determination 1726, if there is no geometry under the touch, or if the 1-finger drag touch event 1704 occurred in zone D 308, a navigation (pan or zoom) operation is performed at 1736.
[0146] If a 1-finger tap followed by a 2nd-finger tap touch event 1706 is detected in zone B 304, a determination is again made at step 1738 regarding whether there is geometry under the touch event. If geometry exists under the touch event, a 2-face bridging operation 1740 is performed. However, if no geometry exists under the touch event, or if the tap operations occurred in zone C 306 and/or zone D 308, no action is performed at 1742.
[0147] In view of the above, it may be noted that all of the various operations, including the creation of a 3D form, the modeling from the face (including a mirroring operation), the re-stroking of the 3D form (on the same or different planes), the creating of a bridge between two faces (i.e., from one tap location to a second tap location), and the tumbling/orbiting, are all performed using the single 3D contriver tool without switching modes or requiring a complex set of user actions. Further, all of the actions described are performed dynamically as the user performs a gesture.
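As a non-limiting illustration, the decision logic of FIG. 17 may be expressed as a single dispatch function, as sketched below. Zone names follow FIG. 3 (zone B corresponds to area 304, zone C to area 306, and zone D to area 308); the boolean inputs are assumed to be supplied by hit testing and application state, and the return strings are illustrative only.

def dispatch(event, zone, geometry_under_touch, plane_changed, symmetry_active):
    if event == "two_finger_drag":
        if zone == "B" and not geometry_under_touch:
            return "two-finger modeling"                      # 1710
        return "navigation pan/zoom"                          # 1712
    if event == "one_finger_drag":
        if zone == "B":
            if not geometry_under_touch:
                return "one-finger modeling"                  # 1716
            return ("modeling from face with mirroring"       # 1724
                    if symmetry_active else
                    "modeling from face without mirroring")   # 1722
        if zone == "C" and geometry_under_touch:
            return ("re-stroke on different plane"            # 1732
                    if plane_changed else
                    "re-stroke on original plane")            # 1734
        return "navigation pan/zoom"                          # 1736
    if event == "tap_then_tap":
        if zone == "B" and geometry_under_touch:
            return "two-face bridging"                        # 1740
        return "no action"                                    # 1742
    return "no action"

print(dispatch("one_finger_drag", "C", True, False, False))   # -> re-stroke on original plane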
Conclusion
[0148] This concludes the description of the preferred embodiment of the invention. The following describes some alternative embodiments for accomplishing the present invention. For example, any type of computer, such as a multi-touch device, mainframe, minicomputer, personal computer, or computer configuration, such as a timesharing mainframe, local area network, or standalone personal computer could be used with the present invention.
[0149] In summary, embodiments of the invention provide a single tool that is displayed with different regions. The single tool provides the ability for the user to perform a variety of operations simply by beginning a touch event (or cursor click event) within a particular region. The tool may be used to navigate/tumble a 3D model, create a 3D geometric form (e.g., on a blank canvas or otherwise), and/or edit an existing 3D geometric form. The operation selected/performed is based on where the touch event begins and not where the gesture associated with the touch event progresses. Nonetheless, once an operation is selected, the operation is based on the user's gesture.
[0150] The foregoing description of the preferred embodiment of the invention has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form disclosed. Many
modifications and variations are possible in light of the above teaching. It is intended that the scope of the invention be limited not by this detailed description, but rather by the claims appended hereto.
REFERENCES
[0151] [1] A.M. Abbas. A Subdivision Surface Interpolating Arbitrarily-Intersecting Network of Curves under Minimal Constraints. cgi2010.miralab.unige.ch, 0:0-3.
[0152] [2] H. Biermann, I.M. Martin, D. Zorin, and F. Bernardini. Sharp features on multiresolution subdivision surfaces. Proceedings of the Ninth Pacific Conference on Computer Graphics and Applications (Pacific Graphics 2001), pages 140-149.
[0153] [3] Adi Levin. Interpolating Nets of Curves by Smooth Subdivision
Surfaces. Proceedings of SIGGRAPH 99, Computer Graphics Proceedings, Annual Conference Series, pages 57-64, 1999.
[0154] [4] A. Nasri. Interpolation of Open Curves by Recursive Subdivision
Surfaces. Mathematics of Surfaces VII, pages 173-188, 1997.
[0155] [5] A. Nasri, W. Bou Karam, and F. Samavati. Sketch-based subdivision models. In Proceedings of the 6th Eurographics Symposium on Sketch-Based
Interfaces and Modeling - SBIM '09, volume 1, page 53, New York, New York, USA, 2009. ACM Press.
[0156] [6] A.H. Nasri. Constructing polygonal complexes with shape handles for curve interpolation by subdivision surfaces. Computer-Aided Design, 33(11):753-765, September 2001.
[0157] [7] Ahmad H. Nasri and Malcolm A. Sabin. Taxonomy of Interpolation Constraints on Recursive Subdivision Curves. The Visual Computer, 18(5-6):382-403, May 2002.
[0158] [8] S. Schaefer, J. Warren, and D. Zorin. Lofting curve networks using subdivision surfaces. SGP '04: Proceedings of the 2004 Eurographics/ACM SIGGRAPH Symposium on Geometry Processing, pages 103-114, 2004.

Claims

WHAT IS CLAIMED IS:
1. A computer-implemented method for performing three-dimensional (3D) modeling comprising:
(a) activating, in a 3D modeling application executing on a computer, a modeling tool;
(b) displaying, in the 3D modeling application, a visual representation of a grid system tool on a digital modeling canvas wherein:
(i) the grid system is displayed in an active modeling plane;
(ii) the visual representation comprises three separate regions; and
(iii) a type of operation performed in the 3D modeling application is controlled by which of the three separate regions a starting touch event of a gesture occurs in;
(c) displaying an existing 3D form on the grid system tool in the 3D modeling application;
(d) receiving the starting touch event of a gesture over the existing 3D form within one of the three separate regions; and
(e) as the gesture is received in the computer, dynamically extending the 3D form by adding 3D geometry to the 3D form, wherein the dynamically extending is based on the gesture and the active modeling plane.
2. The method of claim 1, wherein the computer comprises a multi-touch device.
3. The method of claim 1, wherein:
a symmetry tool is active simultaneously with the modeling tool; and
the dynamically extending simultaneously adds geometry to the 3D form at both a first location of the starting touch event and a second location on the 3D form that mirrors the first location.
4. The method of claim 3, wherein:
the second location is based on a pre-defined mirror plane.
5. The method of claim 3, wherein:
the second location is based on a plane that is perpendicular to a current modeling plane.
6. A computer-implemented method for performing three-dimensional (3D) modeling comprising:
(a) activating, in a 3D modeling application executing on a computer, a modeling tool;
(b) displaying, in the 3D modeling application, a visual representation of a grid system tool on a digital modeling canvas wherein:
(i) the grid system is displayed in an active modeling plane;
(ii) the visual representation comprises three separate regions; and
(iii) a type of operation performed in the 3D modeling application is controlled by which of the three separate regions a starting touch event of a gesture occurs in;
(c) displaying an existing 3D form on the grid system tool in the 3D modeling application;
(d) displaying a visual scale grip on the grid system tool in the 3D modeling application;
(e) receiving the starting touch event of a gesture over the visual scale grip within one of the three separate regions; and
(f) as the gesture is received in the computer, dynamically scaling the 3D form, wherein the dynamically scaling is based on the gesture.
7. The method of claim 6, wherein the computer comprises a multi-touch device.
8. A computer-implemented method for performing three-dimensional (3D) modeling comprising:
(a) activating, in a 3D modeling application executing on a computer, a modeling tool;
(b) displaying, in the 3D modeling application, a visual representation of a grid system tool on a digital modeling canvas wherein:
(i) the grid system is displayed in an active modeling plane;
(ii) the visual representation comprises three separate regions; and
(iii) a type of operation performed in the 3D modeling application is controlled by which of the three separate regions a starting touch event occurs in;
(c) displaying an existing 3D form on the grid system tool in the 3D modeling application;
(d) receiving the starting touch event at a first location over the existing 3D form, wherein the starting touch event comprises a first tap;
(e) capturing a second touch event at a second different location over the existing 3D form, wherein the second touch event comprises a second tap; and
(f) immediately upon capturing the second touch event, automatically and dynamically generating a connecting shape that forms a bridge from the first location to the second different location.
9. The method of claim 8, wherein the computer comprises a multi-touch device.
10. An apparatus for performing three-dimensional (3D) modeling in a multi-touch computer system comprising:
(a) a multi-touch computer device; and
(b) a 3D modeling application executing on the multi-touch computer device, wherein the 3D modeling application is configured to:
(i) activate a modeling tool;
(ii) display a visual representation of a grid system tool on a digital modeling canvas wherein:
(1) the grid system is displayed in an active modeling plane;
(2) the visual representation comprises three separate regions; and
(3) a type of operation performed in the 3D modeling application is controlled by which of the three separate regions a starting touch event of a gesture occurs in;
(iii) display an existing 3D form on the grid system tool in the 3D modeling application;
(iv) receive the starting touch event of a gesture over the existing 3D form within one of the three separate regions; and
(v) as the gesture is received in the computer, dynamically extend the 3D form by adding 3D geometry to the 3D form, wherein the dynamically extending is based on the gesture and the active modeling plane.
11. The apparatus of claim 10, wherein:
a symmetry tool is active simultaneously with the modeling tool; and
the dynamically extending simultaneously adds geometry to the 3D form at both a first location of the starting touch event and a second location on the 3D form that mirrors the first location.
12. The apparatus of claim 11, wherein:
the second location is based on a pre-defined mirror plane.
13. The apparatus of claim 11, wherein:
the second location is based on a plane that is perpendicular to a current modeling plane.
14. An apparatus for performing three-dimensional (3D) modeling in a multi-touch computer system comprising:
(a) a multi-touch computer device; and
(b) a 3D modeling application executing on the multi-touch computer device, wherein the 3D modeling application is configured to:
(i) activate a modeling tool;
(ii) display a visual representation of a grid system tool on a digital modeling canvas wherein:
(1) the grid system is displayed in an active modeling plane;
(2) the visual representation comprises three separate regions; and
(3) a type of operation performed in the 3D modeling application is controlled by which of the three separate regions a starting touch event of a gesture occurs in;
(iii) display an existing 3D form on the grid system tool in the 3D modeling application;
(iv) display a visual scale grip on the grid system tool in the 3D modeling application;
(v) receive the starting touch event of a gesture over the visual scale grip within one of the three separate regions; and
(vi) as the gesture is received in the computer, dynamically scale the 3D form, wherein the dynamically scaling is based on the gesture.
15. An apparatus for performing three-dimensional (3D) modeling in a multi-touch computer system comprising:
(a) a multi-touch computer device; and
(b) a 3D modeling application executing on the multi-touch computer device, wherein the 3D modeling application is configured to:
(i) activate a modeling tool;
(ii) display a visual representation of a grid system tool on a digital modeling canvas wherein:
(1) the grid system is displayed in an active modeling plane;
(2) the visual representation comprises three separate regions; and
(3) a type of operation performed in the 3D modeling application is controlled by which of the three separate regions a starting touch event of a gesture occurs in;
(iii) display an existing 3D form on the grid system tool in the 3D modeling application;
(iv) receive the starting touch event at a first location over the existing 3D form, wherein the starting touch event comprises a first tap;
(v) capture a second touch event at a second different location over the existing 3D form, wherein the second touch event comprises a second tap; and
(vi) immediately upon capturing the second touch event, automatically and dynamically generate a connecting shape that forms a bridge from the first location to the second different location.

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2012/021448 WO2013109246A1 (en) 2012-01-16 2012-01-16 Gestures and tools for creating and editing solid models

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2012/021448 WO2013109246A1 (en) 2012-01-16 2012-01-16 Gestures and tools for creating and editing solid models

Publications (1)

Publication Number Publication Date
WO2013109246A1 (en) 2013-07-25

Family

ID=48799527

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2012/021448 WO2013109246A1 (en) 2012-01-16 2012-01-16 Gestures and tools for creating and editing solid models

Country Status (1)

Country Link
WO (1) WO2013109246A1 (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060132460A1 (en) * 2004-12-22 2006-06-22 Microsoft Corporation Touch screen accuracy
US20080036773A1 (en) * 2006-02-21 2008-02-14 Seok-Hyung Bae Pen-based 3d drawing system with 3d orthographic plane or orthrographic ruled surface drawing
US20080165140A1 (en) * 2007-01-05 2008-07-10 Apple Inc. Detecting gestures on multi-event sensitive devices
US20090021475A1 (en) * 2007-07-20 2009-01-22 Wolfgang Steinle Method for displaying and/or processing image data of medical origin using gesture recognition
WO2010056427A1 (en) * 2008-11-14 2010-05-20 Exxonmobil Upstream Research Company Forming a model of a subsurface region
US20100149211A1 (en) * 2008-12-15 2010-06-17 Christopher Tossing System and method for cropping and annotating images on a touch sensitive display device
US20110296353A1 (en) * 2009-05-29 2011-12-01 Canesta, Inc. Method and system implementing user-centric gesture control
US20110164029A1 (en) * 2010-01-05 2011-07-07 Apple Inc. Working with 3D Objects

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104123747A (en) * 2014-07-17 2014-10-29 北京毛豆科技有限公司 Method and system for multimode touch three-dimensional modeling
CN104123747B (en) * 2014-07-17 2017-10-27 北京毛豆科技有限公司 Multimode touch-control three-dimensional modeling method and system
CN104881260A (en) * 2015-06-03 2015-09-02 武汉映未三维科技有限公司 Projection image realization method and realization device thereof
CN104881260B (en) * 2015-06-03 2017-11-24 武汉映未三维科技有限公司 A kind of projection print implementation method and its realization device

Similar Documents

Publication Publication Date Title
US9182882B2 (en) Dynamic creation and modeling of solid models
US8947429B2 (en) Gestures and tools for creating and editing solid models
US11687230B2 (en) Manipulating 3D virtual objects using hand-held controllers
Millette et al. DualCAD: integrating augmented reality with a desktop GUI and smartphone interaction
Alkemade et al. On the efficiency of a VR hand gesture-based interface for 3D object manipulations in conceptual design
US8878845B2 (en) Expandable graphical affordances
Kang et al. Instant 3D design concept generation and visualization by real-time hand gesture recognition
Shiratuddin et al. Non-contact multi-hand gestures interaction techniques for architectural design in a virtual environment
Schmidt et al. Sketching and composing widgets for 3d manipulation
US8902222B2 (en) Three dimensional contriver tool for modeling with multi-touch devices
WO2013109245A1 (en) Dynamic creation and modeling of solid models
Kang et al. Editing 3D models on smart devices
US8334869B1 (en) Method and apparatus for modeling 3-D shapes from a user drawn curve
Cordeiro et al. A survey of immersive systems for shape manipulation
WO2013109246A1 (en) Gestures and tools for creating and editing solid models
Felbrich et al. Self-organizing maps for intuitive gesture-based geometric modelling in augmented reality
US11398082B2 (en) Affine transformations of 3D elements in a virtual environment using a 6DOF input device
Schkolne et al. Surface drawing.
Araújo et al. Combining Virtual Environments and Direct Manipulation for Architectural Modeling
EP2887195B1 (en) A computer-implemented method for designing a three-dimensional modeled object
Han et al. Ar pottery: Experiencing pottery making in the augmented space
Lai et al. As sketchy as possible: Application Programming Interface (API) for sketch-based user interface
Bae et al. Digital styling for designers: in prospective automotive design
KR20150079453A (en) A computer-implemented method for designing a three-dimensional modeled object
Palleis et al. Novel indirect touch input techniques applied to finger-forming 3d models

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12865675

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12865675

Country of ref document: EP

Kind code of ref document: A1