GB2540791A - Apparatus, methods, computer programs and non-transitory computer-readable storage media for generating a three-dimensional model of an object - Google Patents

Apparatus, methods, computer programs and non-transitory computer-readable storage media for generating a three-dimensional model of an object

Info

Publication number
GB2540791A
GB2540791A
Authority
GB
United Kingdom
Prior art keywords
spine
path
line
data
dimensional model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1513264.0A
Other versions
GB201513264D0 (en)
Inventor
Athimoolam Kesavan
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dexter Consulting Uk Ltd
Original Assignee
Dexter Consulting Uk Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dexter Consulting Uk Ltd filed Critical Dexter Consulting Uk Ltd
Priority to GB1513264.0A priority Critical patent/GB2540791A/en
Publication of GB201513264D0 publication Critical patent/GB201513264D0/en
Publication of GB2540791A publication Critical patent/GB2540791A/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/10Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/20Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/24Indexing scheme for image data processing or generation, in general involving graphical user interfaces [GUIs]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20Indexing scheme for editing of 3D models
    • G06T2219/2021Shape modification

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Computer Hardware Design (AREA)
  • Evolutionary Computation (AREA)
  • Architecture (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An apparatus configured to generate a 3D model of an object comprises an input module configured to obtain input data defining a plurality of line-segments in a two-dimensional space. The plurality of line-segments comprises a first line-segment representing a slice-path 1605 and a second line-segment representing a spine-path 1601. The apparatus also comprises a three-dimensional model generator module configured to generate the three-dimensional model 1607, 1608 of the object using data derived from the first line-segment and the second line-segment.

Description

APPARATUS, METHODS, COMPUTER PROGRAMS AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIA FOR GENERATING A THREE-DIMENSIONAL MODEL OF AN OBJECT
Technical Field
The present invention relates to apparatus, methods, computer programs and non-transitory computer-readable storage media for generating a three-dimensional model of an object.
Background
Known three-dimensional modelling software applications allow a user to create a three-dimensional model of an object. Such applications typically have a steep learning curve and a non-intuitive interface, even for basic workflow operations. Significant time, effort and user interaction are required to input data and modification commands in a format decipherable by the software application to create even a basic three-dimensional model.
Summary
In accordance with first embodiments, there is provided an apparatus configured to generate a three-dimensional model of an object, the apparatus comprising: an input module configured to obtain input data defining a plurality of line-segments in a two-dimensional space, the plurality of line-segments comprising a first line-segment representing a slice-path and a second line-segment representing a spine-path; and a three-dimensional model generator module configured to generate the three-dimensional model of the object using data derived from the first line-segment and the second line-segment.
In accordance with second embodiments, there is provided a method of generating a three-dimensional model of an object, the method comprising: obtaining input data defining a plurality of line-segments in a two-dimensional space, the plurality of line-segments comprising a first line-segment representing a slice-path and a second line-segment representing a spine-path; and generating the three-dimensional model of the object using data derived from the first line-segment and the second line-segment.
In accordance with third embodiments, there is provided a computer program arranged when executed to perform a method of generating a three-dimensional model of an object, the method comprising: obtaining input data defining a plurality of line-segments in a two-dimensional space, the plurality of line-segments comprising a first line-segment representing a slice-path and a second line-segment representing a spine-path; and generating the three-dimensional model of the object using data derived from the first line-segment and the second line-segment.
In accordance with fourth embodiments, there is provided a non-transitory computer-readable storage medium comprising computer-executable instructions which, when executed by a processor, cause a computing device to perform a method of generating a three-dimensional model of an object, the method comprising: obtaining input data defining a plurality of line-segments in a two-dimensional space, the plurality of line-segments comprising a first line-segment representing a slice-path and a second line-segment representing a spine-path; and generating the three-dimensional model of the object using data derived from the first line-segment and the second line-segment.
In accordance with fifth embodiments, there is provided an apparatus configured to generate a three-dimensional model of an object, the apparatus comprising: an input module configured to obtain input data; a three-dimensional model generator module configured to generate the three-dimensional model of the object using data derived from the input data; and an evolution module configured to modify at least one feature of the three-dimensional model of the object based at least in part on one or more target attributes for the three-dimensional model.
In accordance with sixth embodiments, there is provided a computer-implemented method of generating a three-dimensional model of an object, the method comprising: obtaining input data; generating the three-dimensional model of the object using data derived from the input data; and modifying at least one feature of the three-dimensional model of the object based at least in part on one or more target attributes for the three-dimensional model.
In accordance with seventh embodiments, there is provided a computer program arranged when executed to perform a method of generating a three-dimensional model of an object, the method comprising: obtaining input data; generating the three-dimensional model of the object using data derived from the input data; and modifying at least one feature of the three-dimensional model of the object based at least in part on one or more target attributes for the three-dimensional model.
In accordance with eighth embodiments, there is provided a non-transitory computer-readable storage medium comprising computer-executable instructions which, when executed by a processor, cause a computing device to perform a method of generating a three-dimensional model of an object, the method comprising: obtaining input data; generating the three-dimensional model of the object using data derived from the input data; and modifying at least one feature of the three-dimensional model of the object based at least in part on one or more target attributes for the three-dimensional model.
In accordance with ninth embodiments, there is provided an apparatus configured to generate a three-dimensional model of an object, the apparatus comprising: an input module configured to obtain input data, the input data comprising data associated with one or more marker-objects; a three-dimensional model generator module configured to generate the three-dimensional model of the object using data derived from the input data; and a post-processor module configured to perform one or more post-processing operations in relation to the three-dimensional model of the object, wherein the one or more post-processing operations comprises defining at least one marker-object attribute of one or more marker-objects using data derived from the input data.
In accordance with tenth embodiments, there is provided a computer-implemented method of generating a three-dimensional model of an object, the method comprising: obtaining input data, the input data comprising data associated with one or more marker-objects; generating the three-dimensional model of the object using data derived from the input data; and performing one or more post-processing operations in relation to the three-dimensional model of the object, wherein the one or more post-processing operations comprises defining at least one marker-object attribute of one or more marker-objects using data derived from the input data.
In accordance with eleventh embodiments, there is provided computer program arranged when executed to perform a method of generating a three-dimensional model of an object, the method comprising: obtaining input data, the input data comprising data associated with one or more marker-objects; generating the three-dimensional model of the object using data derived from the input data; and performing one or more post-processing operations in relation to the three-dimensional model of the object, wherein the one or more post-processing operations comprises defining at least one marker-object attribute of one or more marker-objects using data derived from the input data.
In accordance with twelfth embodiments, there is provided non-transitory computer-readable storage medium comprising computer-executable instructions which, when executed by a processor, cause a computing device to perform a method of generating a three-dimensional model of an object, the method comprising: obtaining input data, the input data comprising data associated with one or more marker-objects; generating the three-dimensional model of the object using data derived from the input data; and performing one or more post-processing operations in relation to the three-dimensional model of the object, wherein the one or more post-processing operations comprises defining at least one marker-object attribute of one or more marker-objects using data derived from the input data.
Further features and advantages of the invention will become apparent from the following description of preferred embodiments of the invention, given by way of example only, which is made with reference to the accompanying drawings.
Brief Description of the Drawings
Figure 1 is a block diagram of an example of an apparatus in accordance with an embodiment of the present invention;
Figure 2 is a block diagram of an example of a system comprising an apparatus in accordance with an embodiment of the present invention;
Figure 3 is a schematic representation of a system comprising an apparatus in accordance with an embodiment of the present invention;
Figure 4 is a schematic representation of a system comprising an apparatus in accordance with an embodiment of the present invention;
Figure 5 is a schematic representation of a system comprising an apparatus in accordance with an embodiment of the present invention;
Figure 6 is a schematic representation of a system comprising an apparatus in accordance with an embodiment of the present invention;
Figure 7 is a block diagram of a sub-component of the apparatus of Figure 2, the sub-component being a three-dimensional model generator module;
Figure 8 is a diagram showing a table identifying examples of input entities and examples of associated primitive types;
Figures 9A to 9E are diagrams showing examples of slice-paths;
Figures 10A to 10D are diagrams showing examples of spine-paths;
Figure 11 is a diagram showing examples of state transitions between one or more spines and/or one or more non-spines;
Figure 12 is a flowchart showing an example of a method performed by a subcomponent of the apparatus of Figure 2, the sub-component being an input parser module;
Figure 13 is a flowchart showing an example of a method performed by a subcomponent of the apparatus of Figure 2, the sub-component being a geometry generator module;
Figure 14A is a diagram showing an example of input data;
Figures 14B to 14J are diagrams representing acts performed by a subcomponent of the apparatus of Figure 2 based on the input data of Figure 14A, the subcomponent being a geometry generator module;
Figures 14K and 14L are diagrams showing examples of three-dimensional geometries generated based on the input data of Figure 14A and the acts of Figures 14B to 14J;
Figure 15A is a diagram showing an example of input data;
Figures 15B to 15K are diagrams representing acts performed by a subcomponent of the apparatus of Figure 2 based on the input data of Figure 15A, the subcomponent being a geometry generator module;
Figures 15L and 15M are diagrams showing examples of three-dimensional geometries generated based on the input data of Figure 15A and the acts of Figures 15B to 15K;
Figures 16 to 23 are diagrams showing examples of input data and examples of corresponding generated three-dimensional geometries;
Figures 24A to 24I are diagrams showing examples of input data;
Figures 25A to 25I are diagrams showing examples of acts performed by a sub-component of the apparatus of Figure 2 based on the input data of Figures 24A to 24I respectively, the sub-component being a geometry generator module;
Figures 26A to 34D are diagrams showing examples of input data and examples of corresponding generated three-dimensional geometries;
Figures 35A to 35F are diagrams showing examples of input data;
Figures 36A to 36F are diagrams showing examples of input data;
Figures 37A to 41B are diagrams showing examples of input data and examples of corresponding generated three-dimensional geometries;
Figure 42A is a diagram showing an example of input data;
Figure 42B is a diagram showing an example of an act performed by a subcomponent of the apparatus of Figure 2 based on the input data of Figure 42A, the subcomponent being a geometry generator module;
Figure 42C is a diagram showing an example of a three-dimensional geometry generated based on the input data of Figure 42A and the act of Figure 42B;
Figure 43 is a diagram showing an example of input data;
Figure 44A is a diagram showing an example of input data;
Figures 44B and 44C are diagrams showing examples of three-dimensional geometries generated based on the input data of Figure 44A;
Figure 45 is a flowchart showing an example of a method performed by a subcomponent of the apparatus of Figure 2, the sub-component being a post-processor module;
Figure 46A is a diagram showing an example of input data;
Figures 46B to 46G are diagrams showing representations of examples of parameters used by a sub-component of the apparatus of Figure 2 based on the input data of Figure 46A, the sub-component being a post-processor module;
Figures 47A to 47C are diagrams showing representations of examples of parameters used by a sub-component of the apparatus of Figure 2, the sub-component being a post-processor module;
Figure 48A is a diagram showing an example of input data;
Figures 48B to 48D are diagrams showing representations of examples of parameters used by a sub-component of the apparatus of Figure 2 based on the input data of Figure 48A, the sub-component being a post-processor module;
Figure 49A is a diagram showing an example of input data;
Figures 49B and 49C are diagrams showing representations of examples of parameters used by a sub-component of the apparatus of Figure 2 based on the input data of Figure 49A, the sub-component being a post-processor module;
Figure 50A is a diagram showing an example of input data;
Figures 50B to 50D are diagrams showing representations of examples of parameters used by a sub-component of the apparatus of Figure 2 based on the input data of Figure 50A, the sub-component being a post-processor module;
Figure 51A is a diagram showing an example of input data;
Figure 51B is a diagram showing examples of three-dimensional geometries generated based on the input data of Figure 51A;
Figure 51C is a diagram showing a table illustrating a method performed by a sub-component of the apparatus of Figure 2, the sub-component being an evolution module;
Figure 52 is a flowchart showing an example of a method performed by a sub-component of the apparatus of Figure 2, the sub-component being an evolution module;
Figure 53 is a flowchart showing an example of a method performed by a sub-component of the apparatus of Figure 2, the sub-component being an evolution module;
Figure 54 is a flowchart showing an example of a method performed by a sub-component of the apparatus of Figure 2, the sub-component being an evolution module; and
Figure 55 is a flowchart showing an example of a method performed by a sub-component of the apparatus of Figure 2, the sub-component being an evolution module.
Detailed Description
Referring to Figure 1, there is shown a block diagram of an apparatus 100. The apparatus is configured to generate a three-dimensional model of an object.
The apparatus 100 may take various different forms. Examples of forms of the apparatus 100 include, but are not limited to, a handheld computing device, a personal computer, a smartphone, a laptop, a tablet computing device, an image scanning device, an object scanning device, a video game console, a device connected to a computing device and a server.
In this example, the apparatus 100 comprises one or more processors 101 configured to process information and/or instructions. The one or more processors 101 may comprise a central processing unit (CPU). The one or more processors 101 may comprise a graphics processing unit (GPU). The one or more processors 101 are coupled with a bus 102. Operations performed by the one or more processors 101 may be carried out by hardware and/or software.
In this example, the apparatus 100 comprises computer-useable volatile memory 103 configured to store information and/or instructions for the one or more processors 101. The computer-useable volatile memory 103 is coupled with the bus 102. The computer-useable volatile memory 103 may comprise random access memory (RAM).
In this example, the apparatus 100 comprises computer-useable non-volatile memory 104 configured to store information and/or instructions for the one or more processors 101. The computer-useable non-volatile memory 104 is coupled with the bus 102. The computer-useable non-volatile memory 104 may comprise read-only memory (ROM).
In this example, the apparatus 100 comprises one or more data-storage units 105 configured to store information and/or instructions. The one or more data-storage units 105 are coupled with the bus 102. The one or more data-storage units 105 may for example comprise a magnetic or optical disk and disk drive. In some examples, the one or more data-storage units 105 are provided outside of the apparatus 100, for example as part of a network storage arrangement.
In this example, the apparatus 100 comprises one or more input/output (I/O) devices 106 configured to communicate information to the one or more processors 101. The one or more I/O devices 106 are coupled with the bus 102. The one or more I/O devices 106 may comprise at least one network interface. The at least one network interface may enable the apparatus 100 to communicate via one or more data communications networks. Examples of data communications networks include, but are not limited to, the Internet, a Local Area Network (LAN) and a Wide Area Network (WAN). The one or more I/O devices 106 may enable a user to provide input to the apparatus 100 via one or more input devices (not shown). Examples of input devices include, but are not limited to, a keyboard, a mouse, a joystick, a touchpad/track-pad, a motion-capture device, a touch-sensitive screen display, a digital drawing surface, an image-capture device and/or a microphone. The one or more I/O devices 106 may enable information to be provided to a user via one or more output devices (not shown). Examples of output devices include, but are not limited to, a computer monitor, a display screen, a touch-sensitive display screen, a printer, a speaker and/or a television.
Various other entities are depicted for the apparatus 100. For example, when present, an operating system 107, a three-dimensional model generation system 108, one or more modules 109, and data 110 are shown as residing in one, or a combination, of the computer-usable volatile memory 103, computer-usable non-volatile memory 104 and the one or more data-storage units 105. The three-dimensional model generation system 108 may be implemented by way of computer program code stored in memory locations within the computer-usable non-volatile memory 104, computer-readable storage media within the one or more data-storage units 105 and/or other tangible computer-readable storage media.
Although at least some aspects of the examples described herein with reference to the drawings comprise computer processes performed in processing systems or processors, examples described herein also extend to computer programs, for example computer programs on or in a carrier, adapted for putting the examples into practice. The carrier may be any entity or device capable of carrying the program.
It will be appreciated that the apparatus 100 may comprise more, fewer and/or different components from those depicted in Figure 1.
Referring to Figure 2, there is shown schematically an example of a system 200. The system 200 comprises an apparatus 201. The apparatus 201 corresponds to the apparatus 100 described above with reference to Figure 1. The apparatus 201 comprises a plurality of modules, which will be described in more detail below. The term ‘module’ is used herein to denote a component of the apparatus 201. A module may be embodied in hardware and/or software.
In this example, the apparatus 201 comprises an input module 202. The input module 202 is configured to obtain input data 203. The input data 203 defines a plurality of line-segments in a two-dimensional space. The plurality of line-segments includes a first line-segment and a second line-segment. The first line-segment represents a slice-path. The second line-segment represents a spine-path. Slice-paths and spine-paths will be described in more detail below.
In this example, the apparatus 201 comprises a three-dimensional model generator module 204. The three-dimensional model generator module 204 is configured to receive the input data 203 and/or data derived therefrom from the input module 202. The three-dimensional model generator module 204 is configured to analyse the input data 203 and/or data derived therefrom. The three-dimensional model generator module 204 is configured to generate a three-dimensional model of an object using data derived from the first line-segment and the second line-segment.
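One plausible way to picture how a three-dimensional geometry can be derived from a slice-path and a spine-path (a simplified illustrative sketch only, not the claimed method itself) is to instance a closed two-dimensional slice profile at each sample point along the spine and stitch consecutive rings of vertices into faces:

```python
def sweep(slice_pts, spine_pts):
    """Sweep a closed 2D slice profile along a 2D spine-path.

    slice_pts: list of (x, y) points forming a closed profile.
    spine_pts: list of (x, y) points; the spine is lifted into 3D by
               treating its second coordinate as height (an assumption
               made here purely for illustration).
    Returns (vertices, faces) with triangular faces as index triples.
    """
    vertices = []
    for sx, sy in spine_pts:
        # Place one ring of the profile per spine sample, offset by the
        # spine's x-coordinate and raised to its y-coordinate.
        for px, py in slice_pts:
            vertices.append((px + sx, py, sy))
    n = len(slice_pts)
    faces = []
    for ring in range(len(spine_pts) - 1):
        a = ring * n        # first vertex index of the lower ring
        b = (ring + 1) * n  # first vertex index of the upper ring
        for i in range(n):
            j = (i + 1) % n
            # Two triangles per quad between consecutive rings.
            faces.append((a + i, a + j, b + i))
            faces.append((a + j, b + j, b + i))
    return vertices, faces

# A square slice swept along a straight vertical spine yields a tube
# with a square cross-section.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
spine = [(0, 0), (0, 1), (0, 2)]
verts, faces = sweep(square, spine)
```

Three spine samples and a four-point slice give three rings of four vertices each, joined by two bands of eight triangles.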
The term ‘three-dimensional object’ is used herein to mean an entity that can be represented in three-dimensional space. The object may be a real-world (‘physical’) object. Alternatively, the object might not correspond to a real-world object.
The term ‘three-dimensional model’ is used herein to mean data representing an object in three-dimensional space. The three-dimensional model may for example comprise a mathematical representation of the object. The three-dimensional model may for example represent the object in terms of vertices and/or faces. The three-dimensional model may for example represent the object in a digital format in a computing device. The three-dimensional model may be stored and/or communicated in different formats. For example, the three-dimensional model could be communicated as an attachment to an e-mail or could be communicated via a data communications network such as the Internet.
The term ‘three-dimensional geometry’ is used herein to mean a representation of an object in three-dimensional space, for example in the form of one or more vertices and one or more faces.
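As a generic illustration of the vertices-and-faces representation mentioned above (not specific to this patent), a cube can be described as eight vertex positions and six faces that index into them:

```python
# Eight corner vertices of a unit cube, as (x, y, z) coordinates.
cube_vertices = [
    (0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
    (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1),
]
# Six quadrilateral faces, each a tuple of indices into cube_vertices.
cube_faces = [
    (0, 1, 2, 3),  # bottom
    (4, 5, 6, 7),  # top
    (0, 1, 5, 4),  # front
    (2, 3, 7, 6),  # back
    (1, 2, 6, 5),  # right
    (0, 3, 7, 4),  # left
]
```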
In this example, the apparatus 201 comprises a modification and assembly module 205. The modification and assembly module 205 is configured to perform one or more modification and/or assembly operations. An example of a modification operation is modifying the three-dimensional geometry associated with a three-dimensional model.
In this example, the apparatus 201 comprises a scripting module 206. The scripting module 206 is configured to perform one or more scripting operations. An example of a scripting operation is executing one or more rules and/or commands.
In this example, the apparatus 201 comprises an analysis module 207. The analysis module 207 is configured to perform one or more analysis operations. An example of an analysis operation is analysing a creation history of the three-dimensional model.
In this example, the apparatus 201 comprises an evolution module 208. The evolution module 208 is configured to perform one or more evolution operations. An example of an evolution operation is modifying one or more attributes of a three-dimensional model based at least in part on one or more evolution constraints. An example of such an attribute is the shape of a three-dimensional geometry associated with a three-dimensional model.
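A minimal sketch of such an evolution operation, assuming (purely for illustration) a single numeric attribute being driven toward a single target value; the evolution module described here is not limited to this form:

```python
def evolve(attribute, target, rate=0.5, tolerance=1e-3, max_steps=100):
    """Iteratively nudge a numeric model attribute toward a target value.

    Illustrative only: a real evolution module could modify shape,
    dimensions or other features subject to multiple constraints.
    """
    for _ in range(max_steps):
        error = target - attribute
        if abs(error) <= tolerance:
            break
        attribute += rate * error  # move a fraction of the way each step
    return attribute

# Evolve a hypothetical height attribute toward a target of 10 units.
final_height = evolve(attribute=4.0, target=10.0)
```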
In this example, the apparatus 201 comprises a management module 209. The management module 209 is configured to perform one or more management operations. The management module may for example handle decisions on user management, workspace session management and/or file management. An example of a management operation is managing access to three-dimensional models. Another example of a management operation is managing access to data associated with a three-dimensional model. Another example of a management operation is managing access to one or more storage locations. Another example of a management operation is managing access to one or more scripts. Another example of a management operation is managing a user session. Another example of a management operation is restricting file access to one or more protected, private files and/or one or more storage locations. Another example of a management operation is managing login, logout and/or registration actions by a user. Another example of a management operation is deciding and managing the storage location for user-submitted files. Another example of a management operation is providing a mechanism by which files may be imported and opened in the three-dimensional workspace. Another example of a management operation is enabling storage, modification and/or deletion of files in the storage location.
In this example, the apparatus 201 comprises a workspace module 210. The workspace module 210 is configured to allow user interaction with a three-dimensional geometry.
In this example, the apparatus 201 comprises a storage module 211. The storage module 211 is configured to retrieve data from memory and/or store data in memory.
In this example, the apparatus 201 comprises a display module 212. The display module 212 is configured to facilitate display of information to a user.
In this example, the apparatus 201 comprises a file exporter module 213. The file exporter module 213 is configured to export data in a specific file format. Examples of such formats include, but are not limited to, STL and OBJ formats. The file exporter module 213 may be configured to export three-dimensional models created using the apparatus 201 into a format acceptable to a 3D-printer, or in a format required by the user. Examples of such file formats include, but are not limited to, STL, OBJ and G-code. A 3D printer 215 may use the 3D-printable file 214 to produce a real-world (‘physical’) object 216. A three-dimensional model generated using the apparatus 201 described herein may be useable in fields such as, but not limited to, computer-aided design (CAD), animation, 3D-printing, virtual environment creation and game development.
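By way of a non-limiting illustration of the export step described above, the following sketch serialises a triangle mesh into the ASCII STL format named in the text. The function and variable names are illustrative only and do not form part of the disclosure.

```python
# A minimal sketch of exporting a triangle mesh to ASCII STL, one of the
# formats named above. Function and variable names are illustrative only.

def export_ascii_stl(name, triangles):
    """Serialise triangles (each a tuple of three (x, y, z) vertices)
    into the ASCII STL format accepted by many 3D printers."""
    lines = ["solid {}".format(name)]
    for v0, v1, v2 in triangles:
        # A normal of (0, 0, 0) is tolerated by most readers, which
        # recompute normals from the vertex winding order.
        lines.append("  facet normal 0 0 0")
        lines.append("    outer loop")
        for x, y, z in (v0, v1, v2):
            lines.append("      vertex {} {} {}".format(x, y, z))
        lines.append("    endloop")
        lines.append("  endfacet")
    lines.append("endsolid {}".format(name))
    return "\n".join(lines)

# A single triangle in the X-Y plane:
stl_text = export_ascii_stl("demo", [((0, 0, 0), (1, 0, 0), (0, 1, 0))])
```

The resulting text could be written to a `.stl` file and passed to slicing software to produce G-code for the 3D printer 215.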
Referring to Figure 3, there is shown schematically an example of an apparatus 300. In this example, the apparatus 300 comprises a processing device 301, a memory 302 and a display device 303. In this example, the apparatus 300 comprises an input module 304. The input module 304 is configured to receive input data 305. There are various ways in which the input module 304 may obtain the input data 305.
In some examples, the input module 304 is configured to obtain the input data 305 by a user creating the input data using a user interface. In some examples, the user may create the input data 305 by drawing on a digital drawing surface (or ‘graphics tablet’) or another digital tablet, for example using their finger(s), a stylus, a digital pen, or in another way. In some examples, the user may create the input data 305 by drawing on a touchscreen device, for example using their finger(s), a stylus or in another way. As such, the input data may be hand-drawn. In some examples, the user may create the input data 305 using another input peripheral.
In some examples, the input module 304 is configured to obtain the input data 305 in the form of a data file comprising the input data. In some examples, the input module 304 is configured to obtain the input data 305 via an interface that enables the input module 304 to connect to a computer network. In some examples, the input module 304 is configured to obtain the input data 305 in the form of a photograph. In some examples, the input module 304 is configured to obtain the input data 305 in another manner.
In some examples, the input data 305 comprises or defines one or more data items (or ‘input entities’). An example of an input entity is a geometric object in a two-dimensional space. An example of a geometric object is a line-segment. The term ‘line-segment’ is used herein to mean an extent of length. A line-segment may be drawn in two-dimensional space, analogous to a line drawn with a pen on paper. A line-segment may be straight or curved. A line-segment may be in the form of a stroke or path. A closed line-segment may form a shape. Examples of shapes include, but are not limited to, squares, circles and triangles.
In some examples, the input data 305 is in the form of a sketch.
In some examples, the input data 305 comprises or defines text data. Text data may be entered in various different ways. Examples of text data entry mechanisms include, but are not limited to, embedding text in a Scalable Vector Graphics (SVG) file, as a text string in SVG format, typing/writing the text using a keyboard, mouse or joystick, writing on a digital drawing surface and/or writing on a touch-sensitive display screen.
In a digital computing device, line-segments and/or shapes may be represented by different types of data file. Examples include, but are not limited to, a raster image containing pixel data and SVG format data.
An SVG file stores line-segments and/or shapes in text format. The text in the SVG file defines a combination of parameterized non-Bezier curves, Bezier curves, line-segments, curves connecting a series of points or line-segments connecting a series of points. Many existing software applications enable creation of SVG files. Such applications include tools and/or commands to aid drawing of standard line-segments and shapes such as ellipses, circles, rectangles and squares.
The input data 305 may comprise a combination of line-segments, shapes, images, text data and/or computer-readable instructions embedded within an SVG file, or SVG data represented by text or image input.
The input data 305 may comprise an input raster image or other non-SVG image from an image file, from which the data is extracted, for example by direct pixel-information parsing or through conversion to SVG format.
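By way of a non-limiting illustration of reading line-segment data from SVG text, the following sketch extracts path data and stroke colours using only standard-library XML parsing. Real SVG files may carry styling in CSS or a `style` attribute, which this sketch does not handle.

```python
# A minimal sketch of extracting path data and stroke attributes from
# SVG text. The example SVG content is illustrative only.
import xml.etree.ElementTree as ET

SVG = """<svg xmlns="http://www.w3.org/2000/svg">
  <path d="M 0 0 L 10 0" stroke="blue"/>
  <path d="M 0 0 L 0 10" stroke="red"/>
</svg>"""

NS = "{http://www.w3.org/2000/svg}"

def extract_paths(svg_text):
    """Return a list of (path-data, stroke-colour) pairs."""
    root = ET.fromstring(svg_text)
    return [(p.get("d"), p.get("stroke")) for p in root.iter(NS + "path")]

paths = extract_paths(SVG)  # two strokes, distinguishable by colour
```

Such (path, colour) pairs are the kind of raw material the input parser module, described below, could categorise into primitives.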
In some examples, the input data 305 comprises or defines image data. Image data may for example be input using an image captured by an image-capture device, such as a camera.
Referring to Figure 4, there is shown schematically an example of a system 400. In this example, a user provides input data by drawing on a touch-sensitive screen 401 of an apparatus in the form of a digital tablet device 402 using a stylus 403. As will be explained below, the input data 404 displayed on the screen 401, when processed, generates a three-dimensional geometry of a cuboid.
Referring to Figure 5, there is shown schematically an example of a system 500. In this example, a user provides input data by taking a photograph of an item 501 drawn on a piece of paper 502. The photograph is taken using an apparatus 503 in the form of a smartphone with an image-capture device 504. An example of an image-capture device 504 is a camera. In some examples, the captured photographic image 505 is processed in the apparatus 503. Processing may include parsing the image data file as will be described in more detail below. The captured photographic image 505 may be converted to a different format. For example, the captured photographic image 505 may be converted to SVG format. As will be explained below, the captured photographic image 505, when processed, generates a three-dimensional geometry of a cone.
Referring to Figure 6, there is shown schematically an example of a system 600. In this example, a user provides input data by creating a drawing on a display device 601 of an apparatus in the form of a computer 602, using a mouse 603 and/or a keyboard 604. As will be explained below, the input data 605 drawn on the display device 601, when processed, generates a three-dimensional geometry of a torus.
Referring to Figure 7, there is shown schematically an example of a three-dimensional model generator module 700. In this example, the three-dimensional model generator module 700 comprises an input parser module 701. In this example, the three-dimensional model generator module 700 comprises a geometry generator module 702. In this example, the three-dimensional model generator module 700 comprises a post-processor module 703.
In some examples, the input parser module 701 is configured to receive input data via an input module.
In some examples, the input parser module 701 is configured to identify one or more input entities in the received input data. Examples of input entities include, but are not limited to, paths, shapes, images and text.
In some examples, the input parser module 701 is configured to assign one or more user-specified attributes to the one or more identified input entities. An example of a user-specified attribute is an input entity identifier, for example a name.
In some examples, the input parser module 701 is configured to categorise the one or more identified input entities. Categorising the one or more identified input entities may comprise determining a ‘primitive type’ associated with the one or more identified input entities. An input entity for which an associated primitive type has been determined is referred to hereinafter as a ‘primitive’. A primitive may be considered to be a categorised input entity. Primitives will now be described.
Referring to Figure 8, there is shown a table identifying examples of different types of input entity and corresponding different possible primitive types. A first type of input entity is a two-dimensional stroke. The term ‘two-dimensional stroke’ is used herein to mean a line-segment, path or stroke in a two-dimensional space. The line-segment may be straight or curved. The two-dimensional stroke need not have two dimensions itself. For example, a straight line-segment is considered herein to be a two-dimensional stroke. An example of a primitive type associated with a two-dimensional stroke input entity is a spine-path. Another example of a primitive type associated with a two-dimensional stroke input entity is a spine-range-selector-path. Another example of a primitive type associated with a two-dimensional stroke input entity is a silhouette-outline-path. Another example of a primitive type associated with a two-dimensional stroke input entity is a width-line. Another example of a primitive type associated with a two-dimensional stroke input entity is an association-line. Another example of a primitive type associated with a two-dimensional stroke input entity is a z-path. A second type of input entity is a two-dimensional shape. The term ‘two-dimensional shape’ is used herein to mean the form of an outline or boundary of an object. A two-dimensional shape may be formed using one or more two-dimensional strokes. A two-dimensional shape may be formed by a closed line-segment. An example of a primitive type associated with a two-dimensional shape input entity is a slice-path. Another example of a primitive type associated with a two-dimensional shape input entity is a marker-point. Another example of a primitive type associated with a two-dimensional shape input entity is a spine-marker. Another example of a primitive type associated with a two-dimensional shape input entity is a slice-marker. Another example of a primitive type associated with a two-dimensional shape input entity is a silhouette-marker. Another example of a primitive type associated with a two-dimensional shape input entity is a marker in a marker-map. Another example of a primitive type associated with a two-dimensional shape input entity is an association-shape. Another example of a primitive type associated with a two-dimensional shape input entity is a spine-point. Another example of a primitive type associated with a two-dimensional shape input entity is a custom path. A third type of input entity is text. An example of a primitive type associated with a text input entity is a rule. Another example of a primitive type associated with a text input entity is a variable declaration. Another example of a primitive type associated with a text input entity is a definition. Another example of a primitive type associated with a text input entity is a non-variable declaration. Another example of a primitive type associated with a text input entity is a computer-readable instruction. Another example of a primitive type associated with a text input entity is an associated name of a primitive. A fourth type of input entity is an image. An example of a primitive type associated with an image input entity is a displacement-map. Another example of a primitive type associated with an image input entity is a hole-map. Another example of a primitive type associated with an image input entity is a marker-map. Another example of a primitive type associated with an image input entity is a colour-map.
An explanation of these primitive types will now be provided. A slice-path may be used to define a slice through at least part of the three-dimensional geometry. The slice does not necessarily have the same geometric properties as the slice-path used to define it. The slice may be a cross-section. The slice may be planar. A slice-path may be ‘closed’ or ‘open’. The term ‘closed slice-path’ is used herein to denote a slice-path that starts and ends at the same point. The term ‘open slice-path’ is used herein to denote a slice-path that starts and ends at different points. A closed slice-path facilitates distribution of equidistant segment points along the slice-path during the three-dimensional geometry creation procedure, as will be described in more detail below.
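By way of a non-limiting illustration of the equidistant segment points mentioned above, the following sketch distributes N points at equal arc-length intervals along a closed slice-path approximated as a polygon. The function is illustrative only.

```python
# A sketch of distributing n equidistant segment points along a closed
# slice-path, here approximated as a polygon of (x, y) vertices.
import math

def equidistant_points(polygon, n):
    """Place n points at equal arc-length intervals along the closed
    polygon (a list of (x, y) vertices)."""
    pts = list(polygon) + [polygon[0]]  # close the polygon
    seg_lens = [math.dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1)]
    total = sum(seg_lens)
    out, acc, i = [], 0.0, 0
    for k in range(n):
        target = total * k / n           # arc length of the k-th point
        while acc + seg_lens[i] < target:
            acc += seg_lens[i]           # walk past whole segments
            i += 1
        t = (target - acc) / seg_lens[i]  # interpolate within segment i
        (x0, y0), (x1, y1) = pts[i], pts[i + 1]
        out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return out

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
points = equidistant_points(square, 4)  # one point per quarter of the perimeter
```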
Referring to Figures 9A to 9E, various examples of slice-paths are shown.
Referring to Figure 9A, a first example slice-path 900 is generally elliptical in shape. Referring to Figure 9B, a second example slice-path 901 is generally rectangular in shape. Referring to Figure 9C, a third example slice-path 902 is an irregular shape. Referring to Figure 9D, a fourth example slice-path 903 is generally in the shape of a five-pointed star. Referring to Figure 9E, a fifth example slice-path 904 is an irregular shape. A user may be able to specify and/or enable an indication of a start-marker and/or an end-marker of a slice-path. A start-marker and an end-marker may aid visual identification of the start-point and end-point of the slice-path respectively. A start-marker is depicted in relation to the fifth slice-path 904 by a dot. An end-marker is depicted in relation to the fifth slice-path 904 by an arrow. Other symbols could be used to designate a start-marker and/or end-marker. A spine-path is a path that is used to construct a three-dimensional geometry. For example, the three-dimensional geometry may be constructed along the spine-path. A geometric property of a spine-path may influence a geometric property of a three-dimensional geometry constructed using the spine-path. The term spine-path is used because the spine-path, or data derived from it, is analogous to a spine or backbone of the three-dimensional geometry.
In some examples, a spine-path defines a path of protrusion of one or more slice-paths used to generate the three-dimensional geometry. The term ‘protrusion’ is used herein to include operations where one or more slice-paths are, or slice data derived therefrom is, protruded along the spine-path. Examples of protrusion operations include, but are not limited to, extrusion, sweeping and lofting. In some examples, a spine-path and/or data associated with a spine-path defines a direction of protrusion in relation to a slice-paths used to generate the three-dimensional geometry. A spine-path may be the longest path within a three-dimensional geometry. A start-point and end-point of a spine-path may lie on a surface of the three-dimensional geometry.
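By way of a non-limiting illustration of the simplest protrusion operation described above, extrusion, the following sketch copies slice-path points in the X-Y plane to each point along a straight spine in Z. Sweeping and lofting would additionally rotate or scale each copy; the names here are illustrative only.

```python
# A minimal sketch of extrusion: one copy of the 2-D slice-path points
# is placed at each spine-path point along a straight spine in Z.

def extrude(slice_points, spine_z_values):
    """Return 3-D vertices: one copy of the 2-D slice per spine point."""
    return [
        (x, y, z)
        for z in spine_z_values      # walk along the spine-path
        for (x, y) in slice_points   # place slice-path points at that depth
    ]

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
vertices = extrude(square, [0.0, 0.5, 1.0])  # 3 rings of 4 vertices
```

Joining corresponding vertices of adjacent rings with faces would yield the surface of the protruded three-dimensional geometry.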
An input entity representing a spine-path may be distinguished from other types of input entity (for example other strokes) in input data by it having at least one given value of at least one given attribute associated with the spine-path primitive type. Examples of such attributes include, but are not limited to, colour, line thickness and line style. For example, an input entity representing a spine-path may be distinguished from other input entities in input data by the user creating it in a given colour. A start-point of the spine-path may be marked as such by one or more predetermined symbols, for example a dot. An end-point of the spine-path may be marked as such by one or more predetermined symbols, for example an arrow. Such marking may be performed where the spine-path is created using a software application (such as an SVG drawing software application) that provides such a feature. Other measures for specifying the markers may also be used. A spine-path may be associated with a number of spine-path segments and/or spine-path segment points, which will be described in more detail below.
Referring to Figures 10A to 10D, various examples of spine-paths are shown. Referring to Figure 10A, a first example spine-path 1000 is a straight line-segment. Referring to Figure 10B, a second example spine-path 1001 is a curved line-segment. Referring to Figure 10C, a third example spine-path 1002 is in the shape of an ellipse.
Referring to Figure 10D, a fourth example spine-path 1003 is in the shape of a five-pointed star. A user may be able to specify and/or enable an indication of a start-marker and/or an end-marker of a spine-path. A start-marker and an end-marker may aid visual identification of the start-point and end-point of the spine-path respectively. A start-marker is depicted in relation to the spine-paths 1001 and 1002 by a dot. An end-marker is depicted in relation to the spine-paths 1001 and 1002 by an arrow. Other symbols could be used to designate a start-marker and/or end-marker. A spine-range-selector-path is a path associated with all or part of a spine-path. A spine-range-selector-path may, for example, be in the form of a two-dimensional stroke and/or two-dimensional shape. A spine-range-selector-path may be used to select one or more spine-path segment points along the length of the spine-path. The one or more spine-path segment points may be selected to be within a portion (or ‘range’) of the spine-path as indicated by the spine-range-selector-path. A spine-range-selector-path may be associated with a spine-path and a slice-path. One or more spine-range-selector-paths may be associated with a spine-path. A spine-range-selector-path may for example intersect a spine-path at one point, at two points or may completely enclose the spine-path without intersecting it. The point(s) of intersection of the spine-path with the spine-range-selector-path may be expressed in terms of a position of the point(s) of intersection along the spine-path relative to the total length of the spine-path. For example, the position may be expressed in terms of a percentage length of the spine-path. For example, a point of intersection half way along a spine-path may be expressed as a 50% length of the spine-path. The percentage length(s) may be stored as part of a data-structure representing the spine-range-selector-path. 
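By way of a non-limiting illustration of the percentage-length representation described above, the following sketch computes the position of an intersection point along a spine-path approximated as a polyline. The point is assumed to lie on the polyline; the function name is illustrative only.

```python
# A sketch of expressing a point of intersection as a percentage of the
# total spine-path length, the spine-path being approximated as a polyline.
import math

def percentage_along(polyline, point):
    """Return the position of `point` along the polyline as a percentage
    of the polyline's total length."""
    total = sum(math.dist(polyline[i], polyline[i + 1])
                for i in range(len(polyline) - 1))
    walked = 0.0
    for i in range(len(polyline) - 1):
        a, b = polyline[i], polyline[i + 1]
        seg = math.dist(a, b)
        # The point lies on this segment if the two partial distances
        # sum to the segment length (within floating-point tolerance).
        if abs(math.dist(a, point) + math.dist(point, b) - seg) < 1e-9:
            return 100.0 * (walked + math.dist(a, point)) / total
        walked += seg
    raise ValueError("point does not lie on the polyline")

spine = [(0, 0), (10, 0)]
pct = percentage_along(spine, (5, 0))  # half way along the spine-path
```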
The location of one or more points of intersection could be expressed in a different way. A silhouette-outline-path associated with a three-dimensional geometry is an outline of a silhouette of the three-dimensional geometry as appearing from one or more of its side, front, rear, top, bottom or other orthographic views. A silhouette-outline-path may be processed along with slice-path information derived from a slice-path to deduce the three-dimensional geometry. A width-line is associated with a silhouette-outline-path. A width-line is a path within or intersecting the silhouette-outline-path. In some examples, a width-line has a number of width-line segments and/or width-line segment points, as will be described in more detail below. The number of width-line segments and/or width-line segment points may be the same as a number of spine-path segments and/or spine-path segment points of an associated spine-path. In some examples, construction lines drawn perpendicular to the path of the width-line and that pass through the width-line segment points are used to calculate the width of the three-dimensional geometry at that width-line segment point based on the intersection of the construction line with the silhouette-outline-path. A start-point and/or an end-point of the width-line may be marked by one or more given symbols. For example, a start-point may be indicated by a dot and/or an end-point may be indicated by an arrow. Such marking may be performed where the width-line is created using a software application (such as an SVG drawing software application) that provides such a feature. The marking may alternatively be performed in another way. A displacement-map is a map that may be used to control the roughness of at least part of the surface of the three-dimensional geometry. The displacement-map may comprise image data and/or matrix data.
The displacement-map affects the depth of one or more vertices and one or more faces of the three-dimensional geometry in a part of the three-dimensional geometry associated with a corresponding area marked in the displacement-map. A hole-path is a path that defines one or more holes to be created in the three-dimensional geometry along the direction of the spine-path. A hole-path may be defined within a slice-path. The hole-path enables creation of one or more Boolean geometries that may be used to create the one or more holes. A hole-map is a map that may be used to control the presence of one or more holes in one or more parts of the surface of the three-dimensional geometry. The hole-map may comprise image data and/or matrix data. The hole-map may be used to remove or move one or more vertices and one or more faces, or parts thereof, on the surface of the three-dimensional geometry associated with a corresponding area marked in the hole-map. A marker-map is a map that defines a location of one or more points on and/or inside the three-dimensional geometry. The marker-map may comprise image data and/or matrix data. A colour-map is a map that defines one or more colours to be assigned to all or part of one or more vertices and/or faces of the three-dimensional geometry corresponding to one or more areas marked in the colour-map. A spine-point is a location within a slice-path. The spine-point determines a point at which a spine-path passes through a slice-path during protrusion of the slice-path along the spine-path. A slice-marker represents a point inside or near a slice-path. A slice-marker may for example be in the form of a two-dimensional stroke and/or a two-dimensional shape. A slice-marker location may for example be defined with respect to a spine-point, a centroid of a slice-path or a custom location inside a slice-path. A slice-marker may for example be used when defining a marker-point. A spine-marker represents a point on a spine-path.
A spine-marker may for example be in the form of a two-dimensional stroke and/or a two-dimensional shape. A measure related to a distance of the spine-marker along a spine-path may be determined. For example, the measure may be a percentage of the total length of the spine-path. A spine-marker may be used when defining a marker-point. A silhouette-marker represents a point on a silhouette-outline-path. A silhouette-marker may for example be in the form of a two-dimensional stroke and/or a two-dimensional shape. A silhouette-marker may be used when defining a marker-point. A z-path is associated with a spine-path. A z-path is a path traced by an associated spine-path in a third dimension. For example, if a given spine-path is constrained to an X-Y plane, a z-path associated with the given spine-path may be a view of the spine-path in an X-Z plane. A name-string is text data embedded in input data. A name-string may be used to assign a name and/or other identifier to one or more input entities in the input data. The name-string may be placed in a given position relative to the one or more input entities to be named. For example, the name-string may be placed close to one or more input entities and/or may intersect with one or more input entities to be named. The name-string may therefore be used to assign a name or other identifier to an associated input entity based at least in part on a location of the input entity relative to a location of the name-string. Other types of text data may be included in input data, for example for comments or other descriptive data.
An association-line is a line-segment that assists in associating one or more input entities with one or more other input entities. For example, when an association-line intersects a slice-path and a spine-range-selector-path, the slice-path may be associated with the spine-range-selector-path via the association-line.
An association-shape is a two-dimensional shape that may assist in associating one or more input entities with one or more other input entities. For example, when a given shape with one or more given properties encloses two input entities, they may be associated with each other via the association-shape.
An association-shape may alternatively or additionally assist in identifying a primitive type associated with a given input entity. For example, if an association-shape with one or more given properties encloses another input entity, the enclosed input entity may be identified as being associated with a particular primitive type via the association-shape. A named-path is a path with an associated name-string. The name-string may for example be placed near to the named-path in the input data. The named-path may be referenced using the name defined by the name-string. The named-path may be used by computer-readable code and/or script commands using the name defined by the name-string. Paths without associated name-strings may be automatically assigned names depending on the configuration of the input parser module 701. A rule is a computer-readable instruction in the input data. A spine-diagram is input data defining or in the form of a two-dimensional diagram comprising one or more input entities. In some examples, a spine-diagram provides sufficient input data for the three-dimensional model generator module 700 to create a three-dimensional geometry.
In some examples, a spine-diagram comprises at least two input entities. The at least two input entities may be associated with the spine-path and slice-path primitive types. In some examples, a spine-diagram comprises at least three input entities. The at least three input entities may be associated with the spine-path, slice-path and association-line primitive types. In some examples, a spine-diagram comprises at least four input entities. The at least four input entities may be associated with the spine-path, slice-path, spine-range-selector-path and association-line primitive types.
In some examples, a spine-diagram is constrained to a two-dimensional space.
One or more spine-diagrams may be provided in or be defined by the input data. For example, a single SVG file may comprise or define multiple spine-diagrams.
One or more images may be provided in, linked to in, or otherwise be associated with the input data. The one or more images may have embedded matrix data. For example, known SVG creation software applications can embed or link to image data. The one or more images may for example be a rendered preview of the desired resultant generated three-dimensional geometry. The one or more images may for example be processed as a displacement-map, marker-map and/or colour-map.
The input parser module 701 is configured to determine a primitive category associated with an input entity in the input data. The input parser module 701 may be configured to determine a primitive category for the input entity based at least in part on one or more categorisation criteria. For example, the input parser module 701 may be configured to identify a line-segment as belonging to the spine-path primitive category based at least in part on a stroke colour and/or stroke thickness of the line-segment. For example, a blue-coloured line-segment may be determined to be a spine-path. As such, in some examples, the input parser module 701 is configured to recognise primitive categories associated with input entities based at least in part on information in the input data.
The input parser module 701 may be configured to use various different categorisation criteria to facilitate categorisation of input entities.
Examples of visual categorisation criteria include, but are not limited to, a fill colour of the input entity, a stroke colour of the input entity, a boundary colour of the input entity, a boundary thickness of the input entity, a stroke thickness of the input entity and/or a background colour of the input entity.
Examples of other categorisation criteria include, but are not limited to, intersection with an input entity whose primitive category has already been determined, being enclosed by an input entity whose primitive category has already been determined, enclosing an input entity whose primitive category has already been determined, being enclosed by a predetermined shape (for example an ellipse), absolute or relative location of the input entity within the input data, intersection with or relative location to a name-string that identifies a primitive category of the input entity, intersection with or relative location to a text string with a prefix, suffix or other component that identifies a primitive category of the input entity, a property of the input entity that is modifiable by SVG creation software (for example a start-marker, end-marker and/or mid-marker-point) and/or two or more input entities being grouped together (for example in SVG data or an SVG file).
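By way of a non-limiting illustration of one categorisation step, the following sketch maps a single visual attribute, stroke colour, to a primitive type. The colour assignments are illustrative only and are not prescribed by the description above.

```python
# A sketch of categorising input entities by stroke colour, one of the
# visual categorisation criteria listed above. Colour choices are
# illustrative assumptions.

COLOUR_TO_PRIMITIVE = {
    "blue": "spine-path",
    "black": "slice-path",
    "green": "spine-range-selector-path",
    "grey": "association-line",
}

def categorise(entity):
    """Return the primitive type for an input entity (a dict of SVG-like
    attributes), or None if no categorisation criterion matches."""
    return COLOUR_TO_PRIMITIVE.get(entity.get("stroke"))

primitive = categorise({"d": "M 0 0 L 10 0", "stroke": "blue"})
```

A full parser would combine several such criteria (thickness, enclosure, grouping) before settling on a primitive category.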
The input parser module 701 may be configured to assign a user-specified name to a primitive. An example of a factor that may assist with associating name text with a primitive is identifying name text comprising a prefix, suffix or sub-string associated with the primitive category of the primitive and also comprising a desired name of the primitive. Another example of a factor that may assist with associating name text with a primitive is a location, relative distance and/or bounding box of the name text relative to a location and/or bounding box of the input entity associated with the primitive. Another example of a factor that may assist with associating name text with a primitive is a specific property of the input entity associated with the primitive, for example, a start-point of a line-segment, where the input entity is or comprises the line-segment. Another example of a factor that may assist with associating name text with a primitive is a distance between name text and other input entities corresponding to the same type of primitive as that being named.
For example, a text string may be embedded in input data and placed in a location close to the input entity to which it should be assigned. The text string may have the intended name as a prefix, suffix and/or substring. By way of a non-limiting example, a text string SPINE.spl could be placed close to an input entity representing a spine-path in the input data. The input parser module 701 can then identify the spine-path closest to the text string and assign the name spl to that spine-path. By way of another non-limiting example, a text string PATH.pl could be placed in an SVG file such that the text string intersects a two-dimensional stroke. The input parser module 701, if configured accordingly, may then determine the type of the stroke as a named-path and assign the name pl to that named-path.
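By way of a non-limiting illustration of the SPINE.spl example above, the following sketch splits a PREFIX.name text string and attaches the name to the nearest entity of the matching primitive type. The prefix table, entity layout and distance measure are illustrative assumptions.

```python
# A sketch of assigning a user-specified name from a PREFIX.name text
# string to the closest input entity of the matching primitive type.
import math

PREFIX_TO_TYPE = {"SPINE": "spine-path", "PATH": "named-path"}

def assign_name(text, text_pos, entities):
    """Split 'PREFIX.name', then attach the name to the closest entity
    whose primitive type matches the prefix."""
    prefix, _, name = text.partition(".")
    wanted = PREFIX_TO_TYPE[prefix]
    candidates = [e for e in entities if e["type"] == wanted]
    nearest = min(candidates, key=lambda e: math.dist(e["pos"], text_pos))
    nearest["name"] = name
    return nearest

entities = [
    {"type": "spine-path", "pos": (1, 1)},
    {"type": "spine-path", "pos": (9, 9)},
]
named = assign_name("SPINE.spl", (2, 2), entities)  # nearest spine-path wins
```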
Once input entities have been assigned to respective primitive categories, the primitives may be associated with each other based on various different association factors. An example of an association factor is an association-line intersecting a spine-range-selector-path and a silhouette-outline-path. Another example of an association factor is an association-line intersecting a slice-path and a spine-range-selector-path. Another example of an association factor is an association-line intersecting a spine-range-selector-path and a z-path. Another example of an association factor is an association-line intersecting a spine-range-selector-path and a marker-map. Another example of an association factor is an association-line intersecting a spine-range-selector-path and a displacement-map. Another example of an association factor is an association-line intersecting a spine-range-selector-path and a hole-map. Another example of an association factor is an association-line intersecting a spine-range-selector-path and a colour-map. Another example of an association factor is an association-line intersecting a spine-range-selector-path and matrix data. Another example of an association factor is an association-line intersecting a spine-range-selector-path and image data. Another example of an association factor is an association-line grouped with another input entity type. Another example of an association factor is a spine-path being enclosed by and/or intersecting a spine-range-selector-path.
The input parser module 701 may be configured for example such that if a spine-range-selector-path encloses a spine-path, then the spine-path is associated with the spine-range-selector-path. The input parser module 701 may be configured for example such that if an association-line intersects both a silhouette-outline-path and a spine-range-selector-path, then the silhouette-outline-path is associated with a region of a spine-path associated with the spine-range-selector-path. The input parser module 701 may be configured for example such that if a silhouette-outline-path intersects a width-line, then the width-line is associated with the silhouette-outline-path.
The input parser module 701 may be configured to generate one or more data-structures as a parsed result of the input data. A data-structure comprises data derived from information relating to one or more input entities in the input data. A data-structure may be based at least in part on the primitive types identified in the input data and/or an association between input entities in the input data. In some examples, the data-structure has some or all of the following attributes: a list of one or more spines (with each spine having a data-structure as outlined below), a list of computer-readable instructions and a list of named paths. A slice is a data-structure comprising data derived from one or more input entities representing one or more slice-paths. A slice may comprise data derived from one or more further input entities in the input data. A slice may comprise data defining, identifying and/or derived from a name or other identifier of the slice. A slice may comprise data defining, identifying and/or derived from a slice-path associated with the slice. A slice may comprise data defining, identifying and/or derived from one or more slice-markers associated with the slice. The one or more slice-markers may be identified by one or more name and/or location values. A slice may comprise data defining, identifying and/or derived from a spine-point associated with the slice. A slice may comprise data defining and/or identifying a number of points along the slice-path. The points may be referred to as boundary points or slice-path segment points. A slice may comprise data defining and/or identifying a location of one or more points along the slice-path. A spine range selector is a data-structure comprising data derived from one or more input entities representing one or more spine-range-selector-paths. A spine range selector may comprise data derived from one or more further input entities in the input data.
A spine range selector may comprise data defining and/or identifying a name or other identifier of the spine range selector. A spine range selector may comprise data defining, identifying and/or derived from a spine-range-selector-path associated with the spine range selector. A spine range selector may comprise data defining and/or identifying a slice associated with the spine range selector. A spine range selector may comprise data defining, identifying and/or derived from a range of a spine-path that lies within the spine-range-selector-path associated with the spine range selector. The range may be defined in different ways, for example, in terms of a ratio, percentage and/or fraction. A silhouette is a data-structure comprising data derived from one or more input entities representing one or more silhouette-outline-paths. A silhouette may comprise data derived from one or more further input entities in the input data. A silhouette may comprise data defining and/or identifying a name or other identifier of the silhouette. A silhouette may comprise data defining, identifying and/or derived from one or more associated silhouette-outline-paths. A silhouette may comprise data defining, identifying and/or derived from one or more associated width-lines. A silhouette may comprise data defining and/or identifying one or more derived attributes. An example of a derived attribute is a location of one or more width-line segment points along a width-line. Another example of a derived attribute is a distance between intersections of a construction line passing perpendicularly through one or more width-line segment points of a width-line with an associated silhouette-outline-path. A spine is a data-structure comprising data derived from one or more input entities representing one or more spine-paths. A spine may comprise data derived from one or more further input entities in the input data.
A spine may comprise data defining and/or identifying a name or other identifier of the spine. A spine may comprise data defining, identifying and/or derived from one or more associated spine-paths. A spine may comprise data defining, identifying and/or derived from one or more spine-range-selector-paths associated with the spine. A spine may comprise data defining, identifying and/or derived from one or more silhouette-outline-paths associated with the spine. A spine may comprise data defining, identifying and/or derived from a length of one or more spine-paths associated with the spine. A spine may comprise data defining and/or identifying a radiation thickness percentage. A spine may comprise data defining, identifying and/or derived from a z-path associated with the spine. A spine may comprise data defining, identifying and/or derived from one or more rules executed on the spine. A spine may comprise data defining, identifying and/or derived from one or more displacement-maps associated with the spine. A spine may comprise data defining, identifying and/or derived from one or more marker-maps associated with the spine. A spine may comprise data defining, identifying and/or derived from one or more hole-maps associated with the spine. A spine may comprise data defining, identifying and/or derived from one or more colour-maps associated with the spine. A spine may comprise data defining, identifying and/or derived from one or more spine-path segment points of one or more spine-paths associated with the spine. A spine may comprise data defining and/or identifying a location of one or more spine-path segment points along one or more spine-paths associated with the spine. A spine may comprise data defining and/or identifying a number of boundary points of one or more slice-paths associated with the spine. A spine may comprise data defining and/or identifying one or more marker-points associated with the spine. 
A spine may comprise data defining and/or identifying one or more marker-vectors associated with the spine. A spine may comprise data defining and/or identifying one or more marker-planes associated with the spine. A spine may comprise data defining and/or identifying one or more marker-polygons associated with the spine. A spine may comprise data defining and/or identifying one or more spine-markers associated with the spine. A spine may comprise data defining and/or identifying a generated three-dimensional geometry of the spine. The data identifying and/or defining a generated three-dimensional geometry of the spine may comprise data identifying and/or defining one or more vertices and/or one or more faces of the generated three-dimensional geometry. A spine may comprise data defining and/or identifying one or more user-defined attributes. A spine may comprise data defining and/or identifying one or more other attributes derived from input data. A non-spine is a data-structure comprising data derived from one or more spines. A non-spine may comprise data identifying and/or defining an identification string or number associated with the non-spine. The identification string or number may be unique within a given project or globally. A non-spine may comprise data identifying and/or defining a name of the non-spine. A non-spine may comprise data identifying and/or defining one or more constituent spines of the non-spine. A non-spine may comprise data identifying and/or defining one or more constituent non-spines of the non-spine. A non-spine may comprise data defining, identifying and/or derived from one or more marker-points associated with the non-spine. A non-spine may comprise data identifying and/or defining one or more marker-vectors associated with the non-spine. A non-spine may comprise data identifying and/or defining one or more marker-planes associated with the non-spine.
A non-spine may comprise data identifying and/or defining one or more marker-polygons associated with the non-spine. A non-spine may comprise data identifying and/or defining one or more inherited marker-points from one or more constituent spines of the non-spine. A non-spine may comprise data identifying and/or defining one or more inherited marker-vectors from one or more constituent spines of the non-spine. A non-spine may comprise data identifying and/or defining one or more inherited marker-planes from one or more constituent spines of the non-spine. A non-spine may comprise data identifying and/or defining one or more inherited marker-polygons from one or more constituent spines of the non-spine. A non-spine may comprise data identifying and/or defining one or more rules executed to create the non-spine from one or more constituent spines of the non-spine. A non-spine may comprise data identifying and/or defining one or more rules executed to modify the non-spine after creation of the non-spine. A non-spine may comprise data identifying and/or defining a generated three-dimensional geometry of the non-spine. The data identifying and/or defining a generated three-dimensional geometry of the non-spine may comprise data identifying and/or defining one or more vertices and/or one or more faces of the generated three-dimensional geometry. A non-spine may comprise data identifying and/or defining one or more user-defined attributes associated with the non-spine. A non-spine may comprise data identifying and/or defining one or more other attributes derived from input data.
The term ‘constituent spine’ (or ‘reference spine’) is used herein to mean a spine used in creating a non-spine. The term ‘constituent non-spine’ (or ‘reference non-spine’) is used herein to denote a non-spine used in creating a non-spine. Data related to a constituent spine and/or constituent non-spine may be included in a non-spine data-structure. Alternatively or additionally, the non-spine data-structure may comprise a link to such data, for example in a local storage device and/or network storage device. A spine may be created using data derived from input data, for example when at least one spine-path and at least one slice-path is present in the input data. A spine may be created by breaking-down (or ‘decomposing’) a non-spine to its constituent parts, where at least one of the constituent parts is a spine. On recursive decomposition of all non-spines under a non-spine hierarchy, one or more constituent spines may be obtained. A spine may be created by duplicating an existing spine. A non-spine may be created by direct data-structure conversion from a spine to a non-spine. A non-spine may be created by executing one or more rules on one or more spines. A non-spine may be created by executing one or more rules on a combination of one or more spines and one or more non-spines. A non-spine may be created by executing one or more rules on one or more non-spines. A non-spine may be created by reconstructing a non-spine after changing one or more attributes of one or more of its constituent spines and/or constituent non-spines. A non-spine may be created by grouping multiple spines together. A non-spine may be created by grouping a combination of one or more spines and one or more non-spines. A non-spine may be created by grouping multiple non-spines together. A non-spine may be created by duplicating an existing non-spine.
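By way of a non-limiting sketch, the spine and non-spine data-structures and the recursive decomposition of a non-spine hierarchy described above might be represented as follows. The field names and the minimal attribute set are assumptions made for illustration; they are not the attribute lists given above in full:

```python
# Hypothetical minimal representations of the spine and non-spine
# data-structures; only a few of the attributes described above are shown.
from dataclasses import dataclass, field

@dataclass
class Spine:
    name: str
    spine_path: list                                  # spine-path segment points
    vertices: list = field(default_factory=list)      # generated 3D geometry
    faces: list = field(default_factory=list)

@dataclass
class NonSpine:
    name: str
    constituent_spines: list = field(default_factory=list)
    constituent_non_spines: list = field(default_factory=list)
    rules: list = field(default_factory=list)         # creation history metadata

    def decompose(self):
        """Recursively recover all constituent spines under this hierarchy."""
        spines = list(self.constituent_spines)
        for ns in self.constituent_non_spines:
            spines.extend(ns.decompose())
        return spines

a, b = Spine("s1", []), Spine("s2", [])
inner = NonSpine("v", constituent_spines=[b])
union = NonSpine("u", constituent_spines=[a], constituent_non_spines=[inner])
print([s.name for s in union.decompose()])  # ['s1', 's2']
```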
Both spines and non-spines may have an associated generated three-dimensional geometry. The generated three-dimensional geometry is used to generate the three-dimensional model. The three-dimensional model may be exported, for example to a 3D-printable format.
Referring to Figure 11, there is shown an illustration of example spine and non-spine state transitions. A spine may change its state to non-spine. In such cases, the spine may lose at least some of its attributes and/or the ability to be influenced by one or more spine-specific rules.
As indicated at 1100, a spine may remain as a spine following modification of one or more spine parameters. An example of such a spine parameter is a name of the spine.
As indicated at 1101 and 1102, one or more spines may change their state to non-spine following modification of one or more given parameters. For example, a 'Boolean union' type constructive solid geometry modification that combines two spines may produce a non-spine.
As indicated at 1103, a non-spine may remain as a non-spine following reconstruction after modification of one or more attributes of the non-spine or one or more components of the non-spine. For example, when a non-spine is subject to the 'rotation' modification operation that rotates it in three-dimensional space, it retains its non-spine state.
As indicated at 1104, one or more spines and/or one or more non-spines may change their state to non-spine following modification. For example, a 'Boolean union' type constructive solid geometry modification that combines a spine and a non-spine may produce a non-spine.
As indicated at 1105, a non-spine may transition to one or more spines and/or one or more non-spines following decomposition of the non-spine and/or export of one or more sub-objects of the non-spine.
It may be possible to edit, add and/or remove rules selectively from the list of rules in the creation history metadata of spines and non-spines. This may force a regeneration (or ‘reconstruction’) of a three-dimensional geometry by evaluating a new set of one or more rules following a state change.
The resolution of a spine may be adjusted by varying the number and/or location of one or more spine-path segment points along a spine-path associated with the spine. The resolution of a spine may be adjusted by varying the number and/or location of one or more boundary points of a slice-path associated with the spine. Such adjustment of the resolution, referred to herein as ‘shape-preserving resolution-adjustment’, preserves the intended shape of the spine as defined by its attributes.
To adjust the resolution of a non-spine, an initial step may be to regenerate one or more constituent spines of the non-spine with the desired resolution. Then, the non-spine may be reconstructed, for example by running the rules derived from or listed in its creation history metadata. Other methods that may be used to adjust the resolution of a non-spine include, but are not limited to, mesh smoothing, mesh reduction and other methods involving manipulation of vertices and faces.

A marker-point may be created for a spine by providing an expression that is evaluated using factors that include spine-path data associated with the spine. A non-spine comprises one or more spines and may not have a direct spine-path attribute itself. As such, it may not be possible to create a marker-point for a non-spine using an expression that references a spine-path.

A non-spine may be seen to comprise a hierarchical combination of one or more spines and, in some cases, one or more non-spines. A non-spine may be decomposable into constituent parts. A decompose operation may be run for example if a user wishes to modify a non-spine. This may place the constituent parts of the non-spine in the three-dimensional workspace. Desired changes may be made to one or more of the constituent parts. Rules in the creation history metadata of the non-spine may be re-evaluated following the change(s) to the constituent part(s). A reconstructed non-spine can be generated accordingly.
Some rules may be restricted to work only on spines. Some modification rules may be restricted to work only on non-spines.
Referring to Figure 12, there is shown a flowchart showing an example workflow of an input parser module 701. At 1200, the input parser module 701 identifies one or more primitive types of one or more input entities in and/or defined by the input data. At 1201, the input parser module 701 associates one or more names, if provided in the input data, with one or more associated primitives. At 1202, the input parser module 701 detects one or more associations between one or more primitives. At 1203, the input parser module 701 identifies one or more valid primitive types and creates one or more corresponding data-structures.
Referring again to Figure 7, the geometry generator module 702 is configured to obtain the one or more data-structures from the input parser module 701. The geometry generator module 702 is configured to process data in the one or more data-structures and produce the three-dimensional geometry. The three-dimensional geometry may be in the form of vertices and one or more faces. Vertices are points in three-dimensional space. Faces are two-dimensional planes bounded by edges formed by joining selected vertices. In some examples, one or more input entities included in the input data received by the input parser module 701 are not visible in the three-dimensional geometry generated by the geometry generator module 702 and displayed to a user. For example, a spine-path used in the construction of the three-dimensional geometry may not be visible in the three-dimensional geometry itself.
Referring to Figure 13, there is shown an example of acts performed by a geometry generator module 702.
At 1301, the geometry generator module 702 performs silhouette processing, if applicable. Silhouette processing may be applicable if the input data to the geometry generator module 702 comprises data relating to a silhouette-outline-path, for example if it comprises a silhouette data-structure. Silhouette processing may not be applicable if the input data does not include data relating to a silhouette-outline-path.
In some examples, the geometry generator module 702 is configured to perform silhouette processing in relation to one or more silhouette-outline-paths. In some examples, silhouette processing is performed only when silhouette-outline-path data and width-line data are available to the geometry generator module 702. In some examples, silhouette processing comprises identifying a number of width-line segments of the width-line. The number of width-line segments may be the same as the number of spine-path segments of an associated spine-path. The locations of width-line segment points at intervals (for example equal intervals) along the width-line are calculated. At each such width-line segment point, a construction line perpendicular to the path of the width-line at that width-line segment point is constructed. The intersection points of a given perpendicular construction line with the silhouette-outline-path are determined. The distance between the intersection points of the given perpendicular construction line is calculated. This distance is referred to hereinafter as ‘width-at-segment-point’. The width-at-segment-point at an intersection between the width-line and the silhouette-outline-path may be set as zero. This may, for example, be based on user choice.
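By way of a non-limiting sketch of the width extraction just described, the width-line below is assumed to be the horizontal segment y = 0, so the perpendicular construction lines are vertical; the triangular silhouette-outline-path is an illustrative assumption:

```python
# Sketch of width-at-segment-point extraction. The width-line is assumed to
# lie along y = 0, so each construction line is the vertical line x = c.

def width_at(x, polygon):
    """Distance between intersections of the vertical line at x with the polygon."""
    ys = []
    n = len(polygon)
    for i in range(n):
        (x0, y0), (x1, y1) = polygon[i], polygon[(i + 1) % n]
        # Skip vertical edges; for others, interpolate the crossing height.
        if x0 != x1 and min(x0, x1) <= x <= max(x0, x1):
            t = (x - x0) / (x1 - x0)
            ys.append(y0 + t * (y1 - y0))
    return max(ys) - min(ys) if ys else 0.0

# Triangle with its apex at (0, 0), widening to height 4 at x = 2.
triangle = [(0.0, 0.0), (2.0, 2.0), (2.0, -2.0)]

# Width-line segment points at equal intervals along y = 0.
segment_points = [0.0, 1.0, 2.0]
widths = [width_at(x, triangle) for x in segment_points]
print(widths)  # [0.0, 2.0, 4.0] -- zero width allowed at the apex
```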
At 1302, the geometry generator module 702 performs slice processing in relation to a slice-path. In some examples, slice processing comprises identifying a number of slice-path segment points along a slice-path. The slice-path segment points may, for example, be at equal intervals along the slice-path. These slice-path segment points are referred to hereinafter as 'boundary points'. The number of boundary points may depend on user choice. The number of boundary points may be automatically calculated as a number that is proportional to the length of the slice-path. If a spine-point is not associated with the slice-path, the geometry generator module 702 may be configured to designate a given point relative to the boundary points as a default spine-point. An example of such a given point is a centroid of the boundary points. When scaling the slice-path, the spine-point may be considered as a pivot point.
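The slice processing described above might be sketched as follows for a circular slice-path; the parametric circle representation and centroid computation are assumptions made for illustration:

```python
# Sketch of slice processing: n boundary points at equal intervals along a
# circular slice-path, with the centroid taken as the default spine-point.
import math

def slice_boundary_points(radius, n):
    """Boundary points at equal arc-length intervals around a circle."""
    return [(radius * math.cos(2 * math.pi * k / n),
             radius * math.sin(2 * math.pi * k / n)) for k in range(n)]

def centroid(points):
    """Default spine-point: centroid of the boundary points."""
    return (sum(p[0] for p in points) / len(points),
            sum(p[1] for p in points) / len(points))

pts = slice_boundary_points(1.0, 4)       # the user may choose the count
cx, cy = centroid(pts)
# For a circle, the centroid coincides with the centre (up to rounding).
```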
At 1303, the geometry generator module 702 performs spine processing in relation to a spine-path and creates a three-dimensional geometry.
In some examples, spine processing comprises identifying a number of spine-path segment points along a spine-path. The spine-path segment points may, for example, be at equal intervals along the spine-path. The number of spine-path segment points may depend on user choice. The number of spine-path segment points may be automatically calculated as a number that is proportional to the length of the spine-path. If silhouette-outline-path data is present, and silhouette processing is applied, then each spine-path segment point may be associated with corresponding width-at-segment-point data.
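A non-limiting sketch of identifying spine-path segment points at equal arc-length intervals along a polyline spine-path follows; the polyline representation is an assumption, and the segment count would in practice come from user choice or be proportional to the path length as described above:

```python
# Sketch of spine-path segmentation: n_segments + 1 points spaced equally
# by arc length along a polyline spine-path.
import math

def segment_points(path, n_segments):
    lengths = [math.dist(a, b) for a, b in zip(path, path[1:])]
    total = sum(lengths)
    points, step = [], total / n_segments
    for k in range(n_segments + 1):
        target, acc = k * step, 0.0
        for (a, b), seg in zip(zip(path, path[1:]), lengths):
            # Place the point on the edge containing the target arc length.
            if acc + seg >= target or (a, b) == (path[-2], path[-1]):
                t = (target - acc) / seg
                points.append((a[0] + t * (b[0] - a[0]),
                               a[1] + t * (b[1] - a[1])))
                break
            acc += seg
    return points

pts = segment_points([(0.0, 0.0), (4.0, 0.0)], 2)
print(pts)  # [(0.0, 0.0), (2.0, 0.0), (4.0, 0.0)]
```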
For a spine-range-selector-path associated with a spine-path, one or more corresponding slices are determined. As a spine-range-selector-path covers a range of the length of the spine-path (which may be the full length of the spine-path), the calculated spine-path segment points are associated with one or more corresponding spine-range-selector-paths. As such, a spine-path segment point is associated with the slice associated with a spine-range-selector-path.
If no spine-range-selector-path covers a given part of the spine-path, there may be one or more spine-path segment points without any associated slices. In such cases, one or more temporary (or ‘dynamically generated’) slices may be constructed by interpolating one or more slices nearest to the spine-path segment point(s) in question.
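The interpolation of a temporary slice between the two nearest slices might be sketched as follows; the linear blend of corresponding boundary points, and the blend factor, are assumptions made for illustration:

```python
# Sketch of a dynamically generated slice: corresponding boundary points of
# the two nearest slices are blended linearly.

def interpolate_slice(slice_a, slice_b, t):
    """Blend corresponding boundary points of two slices; t in [0, 1]."""
    return [(ax + t * (bx - ax), ay + t * (by - ay))
            for (ax, ay), (bx, by) in zip(slice_a, slice_b)]

small = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]
large = [(3.0, 0.0), (0.0, 3.0), (-3.0, 0.0), (0.0, -3.0)]
mid = interpolate_slice(small, large, 0.5)   # halfway between the two slices
print(mid)  # [(2.0, 0.0), (0.0, 2.0), (-2.0, 0.0), (0.0, -2.0)]
```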
For a spine-path segment point, the corresponding slice is positioned such that the spine-point of the slice is in the same location as the corresponding spine-path segment point. A plane formed by the boundary points of the slice (hereinafter referred to as a 'slice plane') is positioned perpendicular to the direction of the spine-path at the corresponding spine-path segment point.
The slice plane is scaled such that its size (viewed in the same angle as that of the silhouette-outline-path) matches that of the width-at-segment-point of the spine-path segment point.
If a z-path is associated with the spine, then the z-path may be split into a number of z-path segments. The number of z-path segments may be the same as the number of spine-path segments of an associated spine-path. The locations of points along the z-path are calculated. These points along the z-path are hereinafter referred to as ‘z-path segment points’. For each such z-path segment point, a displacement value in the third dimension is calculated. The displacement value is implied by the relative location of the z-path segment points, for example with respect to a start-point of the z-path. This displacement value is then applied to spine-path segment points. A check is made that the slice plane is perpendicular to the now three-dimensional spine at the spine-path segment points.
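A non-limiting sketch of the z-path displacement step follows. The z-path is assumed to be drawn in its own two-dimensional space, with its second coordinate read as height; the displacement of each z-path segment point relative to the z-path's start-point is applied as a Z value to the corresponding spine-path segment point:

```python
# Sketch of z-path processing: lift 2D spine-path segment points into 3D
# using per-point offsets implied by the z-path relative to its start-point.

def apply_z_path(spine_points_2d, z_path_points):
    z0 = z_path_points[0][1]                      # start-point height
    return [(x, y, zy - z0)
            for (x, y), (_, zy) in zip(spine_points_2d, z_path_points)]

spine = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
z_path = [(0.0, 5.0), (1.0, 6.0), (2.0, 5.5)]     # drawn in its own 2D space
print(apply_z_path(spine, z_path))
# [(0.0, 0.0, 0.0), (1.0, 0.0, 1.0), (2.0, 0.0, 0.5)]
```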
One example of a special case is a closed spine-path, where the start-point and end-point of the spine-path are the same. In such a case, the three-dimensional geometry is continuous and does not have to be closed at either end. This may be used, for example, to generate a torus shape.
Another example of a special case is where a user chooses a specific thickness value or percentage value for the spine. In such a case, the vertices are scaled accordingly while constructing the three-dimensional geometry.
Another example of a special case is where a user provides displacement data, for example in the form of a displacement-map. In such a case, the corresponding vertices are adjusted by adding the desired displacement along a direction normal to the affected vertices. A midpoint of the spine may be set as a pivot point for three-dimensional geometry operations. Examples of three-dimensional geometry operations include, but are not limited to, rotation and scale transformations in three-dimensional space.
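The displacement-map special case might be sketched as follows; the vertices, unit normals and displacement amounts are illustrative assumptions:

```python
# Sketch of displacement-map application: each affected vertex is moved along
# its (unit) normal by the desired displacement amount.

def displace(vertices, normals, amounts):
    return [(vx + d * nx, vy + d * ny, vz + d * nz)
            for (vx, vy, vz), (nx, ny, nz), d in zip(vertices, normals, amounts)]

vertices = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
normals = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]      # assumed unit normals
moved = displace(vertices, normals, [0.25, -0.5])
print(moved)  # [(1.25, 0.0, 0.0), (0.0, 0.5, 0.0)]
```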
Referring to Figures 14A to 14L, there is illustrated an example of a method of generating a three-dimensional geometry from input data. The method may be performed by the three-dimensional model generator module 700. In this example, the input data is in the form of a spine-diagram 1400. In this example, the spine-diagram 1400 defines a plurality of line-segments that are completely contained within a two-dimensional space.
Figure 14A shows a spine-diagram 1400. The spine-diagram 1400 includes a spine-path 1401, a spine-range-selector-path 1402, an association-line 1403 and a slice-path 1404. The spine-path 1401 is a straight line-segment. The slice-path 1404 is circular.
Figures 14B to 14E show various acts involved in categorising the different input entities in the spine-diagram 1400. These acts may be performed by the input parser module 701.
As depicted in Figure 14B, input entity 1401 is identified as a spine-path. In this example, this is because input entity 1401 has a relatively high thickness compared to that of other input entities in the spine-diagram 1400.
As depicted in Figure 14C, input entity 1402 is identified as a spine-range-selector-path. In this example, this is because input entity 1402 encloses input entity 1401, which has already been identified as a spine-path.
As depicted in Figure 14D, input entity 1403 is identified as an association-line. In this example, this is because input entity 1403 intersects input entity 1402, which has already been identified as a spine-range-selector-path.
As depicted in Figure 14E, input entity 1404 is identified as a slice-path. In this example, this is because input entity 1404 intersects input entity 1403, which has already been identified as an association-line, and nothing else.
Figure 14F depicts slice processing in relation to the circular slice-path 1404. Slice processing may be performed by the geometry generator module 702. The centroid 1405 of the slice-path 1404 is determined to be the spine-point. Four boundary points 1406, 1407, 1408 and 1409 are identified along the slice-path 1404 at equal intervals. The boundary points 1406, 1407, 1408 and 1409 may be calculated automatically. The user may be able to specify the number of boundary points to be calculated.
Figures 14G to 14K show various acts involved in spine processing in relation to the spine-path 1401, and generation of a corresponding three-dimensional geometry. Spine processing and geometry generation may be performed by the geometry generator module 702.
As depicted in Figure 14G, the spine-path 1401 is divided into two equal spine-path segments. Three equidistant spine-path segment points 1410, 1411, 1412 are identified and/or marked along the spine-path 1401. In this example, spine-path segment point 1410 corresponds to a start-point of the spine-path 1401. In this example, spine-path segment point 1411 corresponds to a mid-point of the spine-path 1401. In this example, spine-path segment point 1412 corresponds to an end-point of the spine-path 1401.
As depicted in Figure 14H, a copy of the slice-path 1404 is placed at each of the spine-path segment points 1410, 1411, 1412 such that the plane of the slice-path 1404 is perpendicular to the direction of the spine-path 1401 at each of the spine-path segment points 1410, 1411, 1412. The slice-path placement is such that the spine-path 1401 passes through the spine-point 1405 of each of the slice-paths 1404. The spine-path 1401 could be thought of as lying in an X-Y plane and the plane formed by the slice-path boundary points 1406, 1407, 1408, 1409 of each slice-path 1404 could be thought of as being parallel to an X-Z plane.
As depicted in Figure 14I, the slice-path boundary points 1406, 1407, 1408, 1409 of each slice-path 1404 are joined with corresponding slice-path boundary points 1406, 1407, 1408, 1409 of an adjacent slice-path 1404.
As depicted in Figure 14J, the faces of the three-dimensional geometry are created by joining the relevant vertices, which correspond to the slice-path boundary points 1406, 1407, 1408 and 1409.
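The construction in Figures 14H to 14J might be sketched as follows: copies of a four-point circular slice are placed at each spine-path segment point of a straight spine, and four-edged faces join adjacent slices. The axis choices follow the X-Y / X-Z convention described above; other details are assumptions made for illustration:

```python
# Sketch of geometry generation by sweeping a circular slice along a
# straight spine: vertices per slice, then quad faces joining adjacent slices.
import math

def sweep(spine_y_values, radius, n_boundary):
    vertices, faces = [], []
    for y in spine_y_values:                      # spine lies along the Y axis
        for k in range(n_boundary):               # slice parallel to the X-Z plane
            a = 2 * math.pi * k / n_boundary
            vertices.append((radius * math.cos(a), y, radius * math.sin(a)))
    for s in range(len(spine_y_values) - 1):      # join adjacent slices
        base = s * n_boundary
        for k in range(n_boundary):
            k2 = (k + 1) % n_boundary
            faces.append((base + k, base + k2,
                          base + n_boundary + k2, base + n_boundary + k))
    return vertices, faces

verts, faces = sweep([0.0, 1.0, 2.0], 1.0, 4)
print(len(verts), len(faces))  # 12 8  (3 slices x 4 points; 2 x 4 quads)
```

Increasing `n_boundary` from four to twelve, as in Figure 14L, would yield correspondingly more vertices and faces and a more cylindrical result.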
Figure 14K depicts a wireframe view 1413 and a shaded view 1414 of the three-dimensional geometry created by the spine-diagram 1400, where there are two spine-path segments, and hence three spine-path segment points 1410, 1411, 1412, in the spine-path 1401, and four boundary points 1406, 1407, 1408, 1409 in the slice-path 1404.
Figure 14L depicts a wireframe view 1415 and a shaded view 1416 of the three-dimensional geometry created by the spine-diagram 1400, where the number of boundary points of the slice-path 1404 is increased from four to twelve. This increase in the number of boundary points results in an increased number of vertices and faces. This results in a more cylindrical shape. A shape with a desired level of correspondence to a perfect cylinder may be achieved by increasing the number of boundary points and/or spine-path segment points to the required level of detail.
Figures 14K and 14L show the generated three-dimensional geometry as having faces containing three edges each. This is achieved by converting the four-edged faces in Figure 14J to three-edged-faces, without any change to the location of the vertices.
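The quad-to-triangle conversion just described might be sketched as follows; splitting each quad along one diagonal is an assumption, as the choice of diagonal is not specified above:

```python
# Sketch of converting four-edged faces to three-edged faces without moving
# any vertices: each quad is split along one diagonal into two triangles.

def quads_to_triangles(quads):
    triangles = []
    for a, b, c, d in quads:
        triangles.append((a, b, c))
        triangles.append((a, c, d))
    return triangles

tris = quads_to_triangles([(0, 1, 5, 4), (1, 2, 6, 5)])
print(tris)  # [(0, 1, 5), (0, 5, 4), (1, 2, 6), (1, 6, 5)]
```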
Referring to Figures 15A to 15M, there is illustrated an example of a method of generating a three-dimensional geometry from input data. The method may be performed by the three-dimensional model generator module 700. In this example, the input data is in the form of a spine-diagram 1500. In this example, the spine-diagram 1500 defines a plurality of line-segments that are completely contained within a two-dimensional space.
Figure 15A shows a spine-diagram 1500. The spine-diagram 1500 includes a spine-path 1501, a spine-range-selector-path 1502, a first association-line 1503, a second association-line 1504, a slice-path 1505, a silhouette-outline-path 1506 and a width-line 1507. The spine-path 1501 is a straight line-segment. The slice-path 1505 is circular. The silhouette-outline-path 1506 is triangular.
Figure 15B depicts slice processing in relation to the circular slice-path 1505. Slice processing may be performed by the geometry generator module 702. The centroid 1508 of the slice-path 1505 is determined to be the spine-point. Four boundary points 1509, 1510, 1511 and 1512 are identified along the slice-path 1505 at equal intervals. The user may be able to specify the number of boundary points.
Figures 15C to 15G show various acts involved in categorising the different input entities in the spine-diagram 1500. These acts may be performed by the input parser module 701.
As depicted in Figure 15C, input entity 1501 is identified as a spine-path. In this example, this is because input entity 1501 has a relatively high thickness compared to that of other input entities in the spine-diagram 1500.
As depicted in Figure 15D, input entity 1502 is identified as a spine-range-selector-path. In this example, this is because input entity 1502 encloses input entity 1501, which has already been identified as a spine-path 1501.
As depicted in Figure 15E, input entities 1503 and 1504 are identified as association-lines. In this example, this is because input entities 1503 and 1504 intersect input entity 1502, which has already been identified as a spine-range-selector-path 1502.
As depicted in Figures 15F and 15G, input entity 1505 is identified as a slice-path. In this example, this is because input entity 1505 intersects input entity 1503, which has been identified as an association-line, and nothing else. Input entity 1506 is identified as a silhouette-outline-path. In this example, this is because input entity 1506 intersects input entity 1504, which has been identified as an association-line, and input entity 1507. Input entity 1507 has not yet been categorised. However, since input entity 1507 intersects input entity 1506, which has already been identified as a silhouette-outline-path, input entity 1507 is identified as a width-line.
Figures 15H to 15M show various acts involved in spine processing in relation to the straight spine-path 1501 and subsequent geometry generation. Spine processing and geometry generation may be performed by the geometry generator module 702.
As depicted in Figure 15H, the spine-path 1501 is divided into two equal spine-path segments. Three equidistant spine-path segment points 1513, 1514, 1515 are identified and/or marked along the spine-path 1501.
Figures 15I and 15J illustrate silhouette processing. Silhouette processing may be performed by the geometry generator module 702. In this example, silhouette processing involves a width-extraction procedure. The width-line 1507 is snipped at the points 1516, 1517 at which the width-line 1507 intersects the silhouette-outline-path 1506. The snipped width-line is then divided into two equal width-line segments. This results in three width-line segment points 1516, 1519, 1517. Dashed construction lines 1521, 1522, 1523 are drawn through the width-line segment points 1516, 1519, 1517 respectively such that they are perpendicular to the width-line 1507 at the width-line segment points 1516, 1519, 1517. The points at which the dashed construction lines 1521, 1522, 1523 intersect with the silhouette-outline-path 1506 are noted. These intersection points are 1524 for the first dashed line-segment 1521; 1525 and 1526 for the second dashed line-segment 1522; and 1527 and 1528 for the third dashed line-segment 1523. In this example, an ‘allow zero width’ flag has been set in the geometry generator module 702. This allows the width-at-segment-point data to be zero. The extracted width data may be expressed as three values, namely: {distance(1524,1524), distance(1525,1526), distance(1527,1528)}, where distance(a,b) means the distance between a and b. In this example, distance(1524,1524)=0.
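A width-extraction step of this kind can be sketched for the simple case of a polygonal silhouette-outline-path with a vertical width-line, so that the perpendicular construction lines are horizontal. The helper names and the simplified edge handling are assumptions for illustration:

```python
def horizontal_intersections(polygon, y):
    """x-coordinates where the horizontal construction line at height y
    crosses the polygon (a closed list of (x, y) vertices standing in
    for the silhouette-outline-path)."""
    xs = []
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        if (y1 <= y < y2) or (y2 <= y < y1):       # edge straddles the line
            t = (y - y1) / (y2 - y1)
            xs.append(x1 + t * (x2 - x1))
    return sorted(xs)

def extract_widths(polygon, y_values):
    """Width data: the span between the outermost intersection points at
    each width-line segment point (zero where there is no crossing)."""
    widths = []
    for y in y_values:
        xs = horizontal_intersections(polygon, y)
        widths.append(xs[-1] - xs[0] if len(xs) >= 2 else 0.0)
    return widths

# A triangular silhouette like 1506: apex at the top, so the width
# tapers to zero, as at segment point 1516 in Figure 15J.
triangle = [(0.0, 0.0), (4.0, 0.0), (2.0, 4.0)]
print(extract_widths(triangle, [0.0, 2.0, 4.0]))   # → [4.0, 2.0, 0.0]
```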
As depicted in Figure 15K, a copy of the slice-path 1505 is placed at each of the spine-path segment points 1513, 1514, 1515 such that the plane of the slice-path 1505 is perpendicular to the direction of the spine-path 1501 at each spine-path segment point 1513, 1514, 1515. The slice-path placement is such that the spine-path 1501 passes through the spine-point 1508 of each of the slice-paths 1505. The spine-path 1501 could be thought of as lying in an X-Y plane. The plane formed by the slice-path boundary points 1509, 1510, 1511, 1512 of each slice-path 1505 could be thought of as being parallel to an X-Z plane.
Returning again to Figure 15J, the slice-paths 1505 are scaled based at least in part on the extracted width data. The first slice-path 1505 is scaled to zero. This is based at least in part on distance(1524,1524)=0. The second and third slice-paths 1505 are scaled proportional to distance(1525,1526) and distance(1527,1528) respectively. Figure 15J also depicts how the faces of the three-dimensional geometry are created by joining the relevant vertices (marked by crosses).
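The scaling and face-creation steps can be sketched as follows. Both helpers are hypothetical names introduced for illustration: the first scales a slice-path's boundary points about its spine-point in proportion to the extracted width data, and the second joins corresponding vertices of adjacent slices into four-edged faces:

```python
def scale_slice(boundary_points, spine_point, width, reference_width):
    """Scale a slice-path about its spine-point.

    The scale factor is proportional to the extracted width data; a
    width of zero collapses the slice to its spine-point (the cone
    apex). reference_width is assumed to be the slice-path's own width.
    """
    s = (width / reference_width) if reference_width else 0.0
    cx, cy = spine_point
    return [(cx + s * (x - cx), cy + s * (y - cy)) for x, y in boundary_points]

def quad_faces(n_slices, n_boundary):
    """Four-edged faces joining corresponding vertices of adjacent
    slices; vertex k of slice i has index i * n_boundary + k."""
    faces = []
    for i in range(n_slices - 1):
        for j in range(n_boundary):
            a = i * n_boundary + j
            b = i * n_boundary + (j + 1) % n_boundary
            faces.append((a, b, b + n_boundary, a + n_boundary))
    return faces

# A four-point slice (as in Figure 15B) scaled by widths 0, 2 and 4
# against a reference width of 4, mirroring the taper of Figure 15J.
square_slice = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]
scaled = [scale_slice(square_slice, (0.0, 0.0), w, 4.0) for w in (0.0, 2.0, 4.0)]
```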
Figure 15L depicts a wireframe view 1529 and a shaded view 1530 of the three-dimensional geometry created by the spine-diagram 1500 where there are two segments, and hence three spine-path segment points 1513, 1514, 1515, in the spine-path 1501, and four boundary points 1509, 1510, 1511, 1512 in the slice-path 1505.
Figure 15M depicts a wireframe view 1531 and a shaded view 1532 of the three-dimensional geometry created by the spine-diagram 1500 where the number of boundary points of the slice-path 1505 is increased from four to twelve. This increase in the number of boundary points results in an increased number of vertices and faces. This results in a more conical shape. A shape with a desired level of correspondence to a perfect cone may be achieved by increasing the number of boundary points and/or spine-path segment points to the level of detail required.
Figures 15L and 15M show the generated three-dimensional geometry as having faces containing three edges each. This is achieved by converting the four-edged faces in Figure 15J to three-edged faces, without any change to the location of the vertices.
In other configurations of the three-dimensional model generator module 700, this triangulation may not be performed.
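In configurations that do perform it, the quad-to-triangle conversion can be sketched as follows (vertex indices only; the choice of split diagonal is an assumption, since the application does not specify one):

```python
def triangulate(quad_faces):
    """Convert four-edged faces to three-edged faces.

    Each quad (a, b, c, d) of vertex indices becomes the triangles
    (a, b, c) and (a, c, d); vertex locations are unchanged, as in the
    conversion between Figures 15J and 15L.
    """
    triangles = []
    for a, b, c, d in quad_faces:
        triangles.append((a, b, c))
        triangles.append((a, c, d))
    return triangles

# One band of a four-sided tube: four quads become eight triangles.
quads = [(j, (j + 1) % 4, (j + 1) % 4 + 4, j + 4) for j in range(4)]
tris = triangulate(quads)
```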
Referring to Figures 16 to 23, there are shown various examples of usage of spine-range-selector-paths in input data.
Figure 16 shows an example of a spine-diagram 1600 and corresponding three-dimensional geometry. A spine-path 1601 that starts at 1602 and ends at 1603, as indicated by a dot marker and an arrow marker respectively, is fully enclosed by a spine-range-selector-path 1604. The spine-path 1601 is slightly curved. The spine-diagram 1600 includes a circular slice-path 1605. An association-line 1606 intersects the spine-range-selector-path 1604 and the circular slice-path 1605. As the spine-range-selector-path 1604 fully encloses (or ‘surrounds’) the spine-path 1601, the circular slice-path 1605 is applicable as a cross-section throughout the entire length of the spine-path 1601. A wireframe view 1607 and a shaded view 1608 of the three-dimensional geometry generated based on spine-diagram 1600 are shown.
Figure 17 shows an example of a spine-diagram 1700 and corresponding three-dimensional geometry. A spine-path 1701 starts at 1702 and ends at 1703. A spine-range-selector-path 1704 encloses only part of the spine-path 1701, from start-point 1702 to an intersection point 1705 approximately half way along the spine-path 1701. The spine-diagram 1700 includes a circular slice-path 1706. An association-line 1707 intersects the spine-range-selector-path 1704 and the circular slice-path 1706. The slice-path 1706 is applicable as a cross-section for the spine-path 1701 from start-point 1702 to intersection point 1705. For the part of the spine-path 1701 from intersection point 1705 to end-point 1703, the nearest one or more slice-paths are identified and used in an interpolation procedure. In spine-diagram 1700, the only slice-path available is the circular slice-path 1706. As such, the circular slice-path 1706 is assigned to the part of the spine-path 1701 between intersection point 1705 and end-point 1703. The generated three-dimensional geometry (wireframe view 1708 and shaded view 1709) are therefore the same as those in Figure 16.
Figure 18 shows an example of a spine-diagram 1800 and corresponding three-dimensional geometry. A spine-path 1801 starts at start-point 1802 and ends at end-point 1803. A first spine-range-selector-path 1804 encloses (or ‘covers’) the region of the spine-path 1801 from the start-point 1802 to a first intersection point 1805. A second spine-range-selector-path 1806 covers the region of the spine-path 1801 from a second intersection point 1807 to the end-point 1803. A first, circular slice-path 1808 and a second, square slice-path 1809 are associated with the first and second spine-range-selector-paths 1804, 1806 respectively via first and second association-lines 1810, 1811 respectively. During geometry creation, the first, circular slice-path 1808 is assigned to the part of the spine-path 1801 from the start-point 1802 to the first intersection point 1805 and the second, square slice-path 1809 is assigned to the part of the spine-path 1801 from the second intersection point 1807 to the end-point 1803. The part of the spine-path 1801 between the first intersection point 1805 and the second intersection point 1807 is not enclosed by a spine-range-selector-path. As such, it does not have any direct slice-path assignments. However, this part of the spine-path 1801 may be assigned one or more dynamically generated slice-paths. In this example, each dynamically generated slice-path is interpolated between the slice-paths on either side of this part of the spine-path 1801. The first intersection point 1805 gets the most circle-like dynamically generated slice-path and the second intersection point 1807 gets the most square-like slice-path. The points between the first intersection point 1805 and the second intersection point 1807 have interpolated slice-paths based on their relative location.
Thus, the generated geometry has a circular cross-section at one end, a square cross-section at the other end and a gradual transition in cross-section in the region from the first intersection point 1805 to the second intersection point 1807. A wireframe view 1812 and shaded view 1813 of the generated three-dimensional geometry are shown.
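The interpolation that produces the dynamically generated slice-paths can be sketched as a per-point linear blend. This assumes the two slice-paths have been resampled to the same number of boundary points; the function name and parameterisation are illustrative:

```python
def interpolate_slice(slice_a, slice_b, t):
    """Dynamically generated slice-path between two assigned slice-paths.

    slice_a and slice_b are boundary-point lists of equal length, so
    corresponding points can be blended. t is the relative location of
    the spine-path segment point between the two intersection points:
    t=0 is most like slice_a, t=1 most like slice_b.
    """
    return [((1 - t) * xa + t * xb, (1 - t) * ya + t * yb)
            for (xa, ya), (xb, yb) in zip(slice_a, slice_b)]

# Blend a (coarse, four-point) circle towards a square, as in Figure 18.
circle = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]
square = [(1.0, 1.0), (-1.0, 1.0), (-1.0, -1.0), (1.0, -1.0)]
halfway = interpolate_slice(circle, square, 0.5)
```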
Figure 19 shows an example of a spine-diagram 1900 and corresponding three-dimensional geometry. The spine-diagram 1900 includes three input entities corresponding to slice-paths. The slice-paths are a first, circular slice-path 1901, a second, triangular slice-path 1902 and a third, square slice-path 1903. This implies that the generated three-dimensional geometry will have three different cross-sections in three different regions.
The first, circular slice-path 1901 is associated with a first spine-range-selector-path 1904 via a first association-line 1905. The first spine-range-selector-path 1904 encloses the start-point 1906 of a spine-path 1907 and intersects the spine-path 1907 at a first intersection point 1908. Spine-path segment points in the range of the spine-path 1907 from the start-point 1906 of the spine-path 1907 to the first intersection point 1908 are assigned the circular slice-path 1901.
The second, triangular slice-path 1902 is associated with a second spine-range-selector-path 1909 via a second association-line 1910. The second spine-range-selector-path 1909 intersects the spine-path 1907 at a second intersection point 1911 and a third intersection point 1916. Spine-path segment points in the range of the spine-path 1907 from the second intersection point 1911 to the third intersection point 1916 are assigned the triangular slice-path 1902.
The third, square slice-path 1903 is associated with a third spine-range-selector-path 1912 via a third association-line 1913. The third spine-range-selector-path 1912 encloses an end-point 1914 of the spine-path 1907 and intersects the spine-path 1907 at a fourth intersection point 1915. The spine-path segment points in the range of the spine-path from the fourth intersection point 1915 to the end-point 1914 are assigned the square slice-path 1903.
Spine-path segment points from the first intersection point 1908 to the second intersection point 1911 are assigned dynamically generated slice-paths which are interpolated values between the circular and triangular slice-paths 1901, 1902. Spine-path segment points from the third intersection point 1916 to the fourth intersection point 1915 are assigned dynamically generated slice-paths which are interpolated values between the triangular and square slice-paths 1902, 1903.
Thus, the generated geometry has a circular cross-section at one end, gradually transitioning to a triangular cross-section and then gradually transitioning to a square cross-section. A wireframe view 1917 and shaded view 1918 of the generated three-dimensional geometry are shown.
Figure 20 shows an example of a spine-diagram 2000 and corresponding three-dimensional geometry. A spine-path 2001 has a start-point 2002 and an end-point 2003. A first spine-range-selector-path 2004 encloses the spine-path 2001 in a region from the start-point 2002 to an intersection point 2005. A second spine-range-selector-path 2006 encloses the spine-path 2001 in a region from the intersection point 2005 to the end-point 2003. The first and second spine-range-selector-paths 2004, 2006 both intersect the spine-path 2001 at the same intersection point 2005. The first and second spine-range-selector-paths 2004, 2006 are associated with a first, circular slice-path 2007 and a second, square slice-path 2008 respectively via first and second association-lines 2009, 2010 respectively. Therefore, the generated geometry has a circular cross-section for spine-path segment points from the start-point 2002 to the intersection point 2005 and abruptly transitions to a square cross-section from the intersection point 2005 to the end-point 2003. A wireframe view 2011 and a shaded view 2012 of the generated three-dimensional geometry are shown.
Figure 21 shows an example of a spine-diagram 2100 and corresponding three-dimensional geometry. A spine-path 2101 has a start-point 2102 and an end-point 2103. A first spine-range-selector-path 2104 encloses the spine-path 2101 in the region from the start-point 2102 to a first intersection point 2105. A second spine-range-selector-path 2106 encloses the spine-path 2101 in the region from a second intersection point 2107 to the end-point 2103. The first and second spine-range-selector-paths 2104, 2106 intersect the spine-path 2101 at the first and second intersection points 2105, 2107 respectively. The first and second spine-range-selector-paths 2104, 2106 are associated with a first, circular slice-path 2108 and a second, square slice-path 2109 respectively via first and second association-lines 2110, 2111 respectively. In this case, spine-path segment points in the region of the spine-path from the second intersection point 2107 to the first intersection point 2105 have two possible choices for slice-paths. Based, for example, on a configuration of the input parser module 701 and/or geometry generator module 702, one slice-path could take precedence over the other. For example, as shown in Figure 21, the input parser module 701 and/or geometry generator module 702 has been configured such that the spine-range-selector-path which covers the smaller range of the spine-path takes precedence. Therefore, the generated geometry has a circular cross-section for spine-path segment points from the start-point 2102 to the first intersection point 2105 and abruptly transitions to a square cross-section from the first intersection point 2105 to the end-point 2103. A wireframe view 2112 and shaded view 2113 of the generated three-dimensional geometry are shown.
Figure 22 shows an example of a spine-diagram 2200 and corresponding three-dimensional geometry. A spine-path 2201 has a start-point 2202 and an end-point 2203. A first spine-range-selector-path 2204 encloses the spine-path 2201 in the region from the start-point 2202 to a first intersection point 2205. A second spine-range-selector-path 2206 fully encloses the spine-path 2201 from the start-point 2202 to the end-point 2203. The first and second spine-range-selector-paths 2204, 2206 are associated with a first, circular slice-path 2207 and a second, square slice-path 2208 respectively via first and second association-lines 2209, 2210 respectively. In this case, spine-path segment points in the region of the spine-path from the start-point 2202 to the first intersection point 2205 have two possible choices for slice-paths. Based, for example, on a configuration of the input parser module 701 and/or geometry generator module 702, one slice-path could take precedence over the other. For example, as shown in Figure 22, the input parser module 701 and/or geometry generator module 702 is configured such that the spine-range-selector-path which covers the smaller range of the spine-path 2201 should take precedence. Therefore the generated geometry has a circular cross-section for spine-path segment points from the start-point 2202 to the first intersection point 2205 and abruptly transitions to a square cross-section from the first intersection point 2205 to the end-point 2203. A wireframe view 2211 and a shaded view 2212 of the generated three-dimensional geometry are shown.
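The smaller-range-takes-precedence configuration described for Figures 21 and 22 can be sketched as follows. The parameterisation of the spine-path over [0, 1] and the data layout are assumptions made for illustration:

```python
def slice_for_point(t, ranges):
    """Slice-path assigned to the spine-path segment point at parameter t.

    ranges: list of (start, end, slice_id) tuples, one per
    spine-range-selector-path. When several selectors cover t, the one
    covering the smaller range of the spine-path takes precedence.
    """
    covering = [(end - start, sid) for start, end, sid in ranges if start <= t <= end]
    return min(covering)[1] if covering else None

# Figure 22: one selector covers [0, 0.5] (circle), another covers the
# whole spine-path (square). The smaller range wins where they overlap.
ranges = [(0.0, 0.5, 'circle'), (0.0, 1.0, 'square')]
print([slice_for_point(t, ranges) for t in (0.0, 0.25, 0.75)])
# → ['circle', 'circle', 'square']
```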
Figure 23 shows an example of a spine-diagram 2300 and corresponding three-dimensional geometry. A spine-path 2301 has a start-point 2302 and an end-point 2303. A first spine-range-selector-path 2304 encloses the spine-path 2301 in the region from the start-point 2302 to a first intersection point 2305. A second spine-range-selector-path 2306 fully encloses the spine-path 2301 from the start-point 2302 to the end-point 2303. A third spine-range-selector-path 2307 encloses the spine-path 2301 in the region from a second intersection point 2308 to the end-point 2303. The first, second and third spine-range-selector-paths 2304, 2306, 2307 are associated with a first, circular slice-path 2309, a second, square slice-path 2310 and a third, triangular slice-path 2311 respectively via first, second and third association-lines 2312, 2313, 2314 respectively.
In this case, spine-path segment points in the region of the spine-path 2301 from the start-point 2302 to the first intersection point 2305 have two possible choices for slice-paths, namely the first, circular slice-path 2309 or the second, square slice-path 2310. Spine-path segment points in the region of the spine-path 2301 from the second intersection point 2308 to the end-point 2303 also have two possible choices for slice-paths, namely the second, square slice-path 2310 or the third, triangular slice-path 2311. Based for example on a configuration of the input parser module 701 and/or geometry generator module 702, one slice-path could take precedence over the other. For example, as shown in Figure 23, the input parser module 701 and/or geometry generator module 702 is configured such that the spine-range-selector-path which covers the smaller range of the spine-path 2301 should take precedence. Therefore, the generated geometry has a circular cross-section for spine-path segment points from the start-point 2302 to the first intersection point 2305, abruptly transitions to a square cross-section from the first intersection point 2305 to the second intersection point 2308, and again abruptly transitions to a triangular cross-section from the second intersection point 2308 to the end-point 2303. A wireframe view 2315 and a shaded view 2316 of the generated three-dimensional geometry are shown.
Referring to Figures 24A to 24I, there are shown various examples of silhouette-outline-paths and associated width-lines. In these examples, all of the width-lines have a start-marker, indicated by a dot, and an end-marker, indicated by an arrow. In these examples, the silhouette-outline-paths are all closed paths.
Figure 24A shows an example of a silhouette-outline-path 2400 and an associated width-line 2401. The silhouette-outline-path 2400 is in the shape of a vase. The width-line 2401 is a straight line-segment. The width-line 2401 starts on the silhouette-outline-path 2400 and ends within the silhouette-outline-path 2400.
Figure 24B shows a width-line 2402 that starts within a silhouette-outline-path 2403 and ends on the silhouette-outline-path 2403. The silhouette-outline-path 2403 is in the shape of a vase. The width-line 2402 is a straight line-segment.
Figure 24C shows a width-line 2404 that starts within a silhouette-outline-path 2405 and ends within the silhouette-outline-path 2405. The silhouette-outline-path 2405 is in the shape of a vase. The width-line 2404 is a straight line-segment.
Figure 24D shows a width-line 2406 that starts outside a silhouette-outline-path 2407 and ends outside the silhouette-outline-path 2407. The silhouette-outline-path 2407 is in the shape of a vase. The width-line 2406 is a straight line-segment.
Figure 24E shows a width-line 2409 that starts outside a silhouette-outline-path 2408 and ends outside the silhouette-outline-path 2408. The silhouette-outline-path 2408 is in the shape of a vase. The width-line 2409 is a straight line-segment.
Figure 24F shows just a silhouette-outline-path 2410 and a string "PATH.p1" 2411. The silhouette-outline-path 2410 is a named-path identified by the string "PATH.p1" 2411. The implied name of the path is "p1". No width-line is shown. An implied width-line may be determined to be a vertical line-segment that passes through the centre of a bounding box of the silhouette-outline-path 2410.
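Such an implied width-line can be sketched as follows for a polygonal silhouette-outline-path; the helper name is illustrative:

```python
def implied_width_line(polygon):
    """Implied width-line for a named-path with no explicit width-line.

    Returns the endpoints of a vertical line-segment through the centre
    of the bounding box of the silhouette-outline-path, spanning the
    box's full height, as described for Figure 24F.
    """
    xs = [p[0] for p in polygon]
    ys = [p[1] for p in polygon]
    cx = (min(xs) + max(xs)) / 2.0
    return ((cx, min(ys)), (cx, max(ys)))

# Bounding box of a 4x2 rectangle → vertical width-line at x=2.
print(implied_width_line([(0.0, 0.0), (4.0, 0.0), (4.0, 2.0), (0.0, 2.0)]))
# → ((2.0, 0.0), (2.0, 2.0))
```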
Figure 24G shows a horizontal width-line 2412 that starts within a silhouette-outline-path 2413 and ends within the silhouette-outline-path 2413. The silhouette-outline-path 2413 is approximately in the shape of a rectangle. The width-line 2412 is a straight line-segment. In this and other examples, the direction of the width-line 2412 is relevant, as the width calculated at the start-point of the width-line 2412, as indicated by the dot, corresponds to the spine-path segment point at the start-point of the corresponding spine-path.
Figure 24H shows a width-line 2414 that starts within a silhouette-outline-path 2415 and ends within the silhouette-outline-path 2415. The silhouette-outline-path 2415 is an irregular shape. The width-line 2414 is a curved line. The direction of the width-line 2414 is bottom-to-top.
Figure 24I shows a width-line 2416 that starts within a silhouette-outline-path 2417 and ends within the silhouette-outline-path 2417. The silhouette-outline-path 2417 is in the shape of a banana. The width-line 2416 is a curved line. The direction of the width-line 2416 is bottom-to-top.
Referring to Figures 25A to 25I, there are shown various examples of ways of extracting width data from a silhouette-outline-path. The silhouette-outline-paths and width-lines shown in Figures 25A to 25I correspond to those shown in Figures 24A to 24I respectively.
The width-extraction procedure may involve dividing a width-line into the same number of width-line segments as the number of spine-path segments of an associated spine-path. The width-extraction procedure may involve drawing a construction line perpendicular to the direction of the width-line at each width-line segment point and noting the point(s) at which the perpendicular construction line intersects the silhouette-outline-path. The width-extraction procedure may involve, for each perpendicular construction line, determining the distance between the intersection point(s) of the perpendicular construction line with the silhouette-outline-path. This gives width data for the associated width-line segment point.
Figures 25A to 25I assume that there are two spine-path segments in a corresponding spine-path and hence three spine-path segment points.
Figures 25A to 25I each show three dashed construction lines drawn perpendicular to the width-line, and intersecting the width-line at a respective width-line segment point. The points of intersection of the dashed construction lines and the silhouette-outline-path are points ‘a1’ and ‘a2’ for the first width-line segment point, ‘b1’ and ‘b2’ for the second width-line segment point, and ‘c1’ and ‘c2’ for the third width-line segment point. The width data based on Figures 25A to 25I may be in the form: {distance(a1,a2), distance(b1,b2), distance(c1,c2)}.
Figures 25A to 25C show that when the width-line does not intersect both ends of the silhouette-outline-path, some width data relating to the silhouette-outline-path is lost.
Figure 25D shows that when an ‘allow zero width’ flag is not set, distance(a1,a2) and distance(c1,c2) can be non-zero at the point(s) where the width-line intersects the silhouette-outline-path.
Figure 25E shows that when an ‘allow zero width’ flag is set, distance(a1,a2) and distance(c1,c2) are set to zero at the point(s) where the width-line intersects the silhouette-outline-path. However, in some examples, even if the 'allow zero width' flag is set, the input parser module 701 may be configured to set a width-at-segment-point as zero only if the value of width data extracted along the width-line gradually transitions to zero. In such a configuration of the input parser module 701, the width data extracted from the data depicted in Figure 25E will be the same as that of Figure 25D even though the 'allow zero width' flag is set. As a further illustration, Figure 15J shows that, even though the 'allow zero width' flag is set, the width-at-segment-point data may be allowed to be zero at one width-line segment point 1516 in the width-line but is set to distance(1527,1528) for the width-line segment point 1517. This is because the width-at-segment-point values gradually tend towards zero along the width-line in the direction from 1517 to 1516. At the point 1517, even though the intersection of the width-line 1518 with the silhouette-outline-path 1506 may imply a zero value for a width-at-segment-point, the value is set to distance(1527,1528) because of the abrupt transition to zero width value along the width-line in the direction from 1516 to 1517.
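One possible formulation of this 'gradual transition' rule is sketched below. The taper_ratio threshold and the neighbour-based test are invented for illustration, since the application does not specify how gradualness is judged:

```python
def apply_zero_width_rule(widths, taper_ratio=0.5):
    """Keep a zero width-at-segment-point only where the widths
    gradually taper towards it.

    A zero is deemed gradual if its non-zero neighbour is at most
    taper_ratio times the maximum width; an abrupt zero is replaced by
    the neighbouring width, as at point 1517 in Figure 15J.
    """
    out = list(widths)
    peak = max(widths) or 1.0
    for i, w in enumerate(out):
        if w == 0.0:
            nbrs = [out[j] for j in (i - 1, i + 1) if 0 <= j < len(out) and out[j] > 0]
            if nbrs and min(nbrs) > taper_ratio * peak:
                out[i] = min(nbrs)   # abrupt transition: do not allow zero
    return out

print(apply_zero_width_rule([0.0, 2.0, 4.0]))   # tapering end: zero kept
# → [0.0, 2.0, 4.0]
print(apply_zero_width_rule([0.0, 4.0, 4.0]))   # abrupt end: zero replaced
# → [4.0, 4.0, 4.0]
```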
Figure 25F shows that the silhouette-outline-path is specified as a named-path and no width-line is explicitly provided. An implied width-line, w*, may be constructed, for example as a vertical line that passes through the centre of a bounding box of the silhouette-outline-path. Based on w*, width data can then be extracted.
Figure 25G shows a horizontal width-line from right to left. The direction of the width-line is relevant, as the width calculated at the start-point of the width-line is associated with a spine-path segment point at the start-point of a corresponding spine-path. The locations of the labelled intersection points and the order of the width data are still {distance(a1,a2), distance(b1,b2), distance(c1,c2)}.
Figures 25H and 25I both show a curved width-line in a bottom-to-top direction. At each of the three width-line segment points of the width-line, the dashed construction lines are perpendicular to the direction of the width-line at that width-line segment point.
Referring to Figures 26A to 26C, there is shown an example of input data in the form of a spine-diagram 2600 and corresponding generated three-dimensional geometries. The spine-diagram 2600 corresponds to the three-dimensional geometry of a cylinder. The relatively thick line-segment 2601 starting at 2602 and ending at 2603 is a spine-path. The line-segment 2604 that encloses the spine-path 2601 is a spine-range-selector-path. The line-segment 2605 that intersects the spine-range-selector-path 2604 is an association-line. The closed line-segment 2606 that intersects the association-line 2605 is a slice-path. The spine-diagram 2600 may be represented in SVG format by the following text:
<?xml version="1.0"?>
<svg width="700" height="1000" xmlns="http://www.w3.org/2000/svg">
<path fill="none" stroke="#000000" stroke-width="8" d="m216.891083,272.583588l0.203293,165.197571"/>
<path fill="none" stroke="#000000" stroke-width="4" d="m186.042297,249.517334c37.1698,-0.222534 67.706802,17.435242 90.577301,37.860077c26.417114,23.592133 30.633087,36.830139 34.742004,67.409424c2.335999,17.384796 4.173309,37.945923 1.240784,55.404999c-2.591797,15.430756 -27.383789,62.824524 -47.14978,70.179626c-28.331512,10.54245 -82.399811,-19.713989 -95.540497,-36.013214c-44.339806,-54.99762 -19.211411,-184.842377 65.761597,-205.92189"/>
<path fill="none" stroke="#000000" stroke-width="4" d="m248.640289,325.952026c24.274811,2.866608 101.641998,-11.816162 99.860809,-81.273834"/>
<path fill="none" stroke="#000000" stroke-width="4" d="m396.238342,248.662689a36.168133,36.168133 0 1 1 -72.336243,0a36.168133,36.168133 0 1 1 72.336243,0z"/>
</svg>
A wireframe rendering 2607 and a shaded view 2608 of the three-dimensional geometry generated by parsing and processing the spine-diagram 2600 are shown. The generated three-dimensional geometry is a cylinder, shown in perspective projection.
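A first parsing step over SVG input of this kind might identify the spine-path by its relatively thick stroke. Below is a minimal sketch using Python's standard library; the path data is abbreviated and illustrative, not the full spine-diagram 2600:

```python
import xml.etree.ElementTree as ET

SVG = """<?xml version="1.0"?>
<svg width="700" height="1000" xmlns="http://www.w3.org/2000/svg">
  <path stroke-width="8" d="m216.9,272.6l0.2,165.2"/>
  <path stroke-width="4" d="m186.0,249.5c37.2,-0.2 67.7,17.4 90.6,37.9"/>
</svg>"""

def find_spine_path(svg_text):
    """Pick out the spine-path: the <path> with the largest stroke-width,
    mirroring the thickness heuristic of Figure 15C."""
    ns = {'svg': 'http://www.w3.org/2000/svg'}
    root = ET.fromstring(svg_text)
    paths = root.findall('svg:path', ns)
    return max(paths, key=lambda p: float(p.get('stroke-width', '1')))

spine = find_spine_path(SVG)
print(spine.get('stroke-width'))   # → 8
```

The remaining paths would then be categorised by the enclosure and intersection tests described for Figures 15C to 15G, which requires interpreting each `d` attribute's path data.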
Referring to Figures 27A to 27C, there is shown an example of input data in the form of a spine-diagram 2700 and corresponding generated three-dimensional geometries. The spine-diagram 2700 corresponds to the three-dimensional geometry of a cone. The relatively thick line-segment 2701 starting at 2702 and ending at 2703 is a spine-path. The line-segment 2704 that encloses the spine-path 2701 is a spine-range-selector-path. The line-segments 2705, 2706 that intersect the spine-range-selector-path 2704 are association-lines. The closed line-segment 2707 that intersects just association-line 2705 is a slice-path. The line-segment 2708 that intersects association-line 2706 and another line-segment 2709 is a silhouette-outline-path. The other line-segment 2709 that starts at 2710 and ends at 2711 is a width-line. In Figure 27A, the spine-path 2701 and the width-line 2709 each has a dot at one end and an arrow at the other end. In some examples, the input parser module 701 is configured to interpret the dot as a starting point of the line-segment and the arrow as an end-point of the line-segment. These are the start-markers and end-markers respectively. It may be possible to create these markers in SVG creation software applications.
The spine-diagram 2700, including the start and end-markers, may be represented in SVG format by the following text:
<?xml version="1.0"?>
<svg width="700" height="1000" xmlns="http://www.w3.org/2000/svg">
<defs>
<marker refX="0" refY="0" orient="auto" id="DotS" style="overflow:visible">
<path d="m -2.5,-1 c 0,2.76 -2.24,5 -5,5 -2.76,0 -5,-2.24 -5,-5 0,-2.76 2.24,-5 5,-5 2.76,0 5,2.24 5,5 z" transform="matrix(0.2,0,0,0.2,1.48,0.2)" style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt"/>
</marker>
<marker refX="0" refY="0" orient="auto" id="Arrow1Send" style="overflow:visible">
<path d="M 0,0 5,-5 -12.5,0 5,5 0,0 z" transform="matrix(-0.2,0,0,-0.2,-1.2,0)" style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt"/>
</marker>
</defs>
<path fill="none" stroke="#000000" stroke-width="8" marker-start="url(#DotS)" marker-end="url(#Arrow1Send)" d="m211.288681,178.26088l0.191345,155.472855"/>
<path fill="none" stroke="#000000" stroke-width="4" d="m 197.194275,127.950928c27.489883,-0.237244 53.717407,18.587021 71.862518,40.36113c20.958862,25.15065 24.303772,39.263153 27.56369,71.862503c1.853333,18.533234 3.310974,40.452637 0.984406,59.065063c-2.056274,16.450134 -21.72583,66.974762 -37.407867,74.815765c-22.477722,11.238861 -65.374603,-21.016327 -75.800171,-38.392303c-35.178452,-58.630768 -15.242004,-197.053101 52.174133,-219.525162"/>
<path fill="none" stroke="#000000" stroke-width="4" d="m275.930084,328.130035c22.845856,2.697845 84.788879,10.601837 116.386169,41.217834"/>
<path fill="none" stroke="#000000" stroke-width="4" d="m413.721527,286.026642l-71.466675,150.522766l146.095642,0.632446l-74.628967,-151.155212"/>
<path fill="none" stroke="#000000" stroke-width="4" marker-start="url(#DotS)" marker-end="url(#Arrow1Send)" d="m413.332764,248.068971l1.063263,217.972107"/>
<path fill="none" stroke="#000000" stroke-width="4" d="m338.275177,170.349136c-4.237549,22.610931 -51.916046,48.052521 -73.232391,49.712097"/>
<path fill="none" stroke="#000000" stroke-width="4" d="m322.620178,157.247772a34.039013,34.039013 0 1 1 68.078033,0a34.039013,34.039013 0 1 1 -68.078033,0z"/>
</svg>
The spine-diagram 2700, excluding the start and end-markers, may be represented in SVG format by the following text:
<?xml version="1.0"?>
<svg width="700" height="1000" xmlns="http://www.w3.org/2000/svg">
<path fill="none" stroke="#000000" stroke-width="8" d="m 174.794479,175.578934l0.191345,155.472855"/>
<path fill="none" stroke="#000000" stroke-width="4" d="m 160.700073,127.268982c27.489883,-0.237244 53.717407,18.587021 71.862503,40.361115c20.958878,25.15065 24.303787,39.263168 27.563705,71.862534c1.853333,18.533218 3.310974,40.452652 0.984406,59.065048c-2.056274,16.450165 -21.725845,66.974792 -37.407867,74.815765c-22.477737,11.238892 -65.374603,-21.016327 -75.800171,-38.392273c-35.178459,-58.630798 -15.242004,-197.053131 52.174133,-219.525208"/>
<path fill="none" stroke="#000000" stroke-width="4" d="m239.435883,325.44812c22.845856,2.697845 84.788879,10.601837 116.386169,41.217834"/>
<path fill="none" stroke="#000000" stroke-width="4" d="m377.227325,283.344696l-71.466675,150.522797l146.095673,0.632446l-74.628998,-151.155243"/>
<path fill="none" stroke="#000000" stroke-width="4" d="m376.838562,245.387039l1.063263,217.972122"/>
<path fill="none" stroke="#000000" stroke-width="4" d="m301.780975,167.667191c-4.237549,22.610916 -51.916061,48.052536 -73.232391,49.712112"/>
<path fill="none" stroke="#000000" stroke-width="4" d="m286.126007,154.565842a34.039013,34.039017 0 1 1 68.078003,0a34.039013,34.039017 0 1 1 -68.078003,0z"/>
</svg>
A wireframe rendering 2712 and a shaded view 2713 of the three-dimensional geometry generated by parsing and processing the spine-diagram 2700 are shown. The generated three-dimensional geometry is a cone, shown in perspective projection.
Referring to Figures 28A to 34D, there are shown various examples of spine-diagrams and corresponding wireframe renderings and shaded views. The first column in each of Figures 28A to 34D shows an example of a spine-diagram. When processed, the spine-diagram creates a three-dimensional geometry that is shown in the corresponding second column in wireframe view and the corresponding third column in shaded view. In each of these spine-diagrams, the thicker stroke represents a spine-path. In some of these spine-diagrams, the start-point of width-lines and/or spine-paths is represented by a dot start-marker and the end-point is represented by an arrow end-marker.
Referring to Figure 28A, there is shown a spine-diagram 2800 that, when processed, generates a cylinder geometry. A wireframe rendering 2801 and a shaded view 2802 of the output is shown.
Referring to Figure 28B, there is shown a spine-diagram 2803 that, when processed, generates a cylinder geometry with a specified silhouette width. A wireframe rendering 2804 and a shaded view 2805 of the output is shown. The specified silhouette width is the same as the diameter of the slice-path in the spine-diagram 2803. As such, the generated output is the same as that in Figure 28A.
Referring to Figure 28C, there is shown a spine-diagram 2806 that, when processed, generates a sphere geometry. A wireframe rendering 2807 and a shaded view 2808 of the output is shown. This assumes that zero width is allowed.
Referring to Figure 28D, there is shown a spine-diagram 2809 that, when processed, generates a hemisphere geometry. A wireframe rendering 2810 and a shaded view 2811 of the output is shown. This assumes that zero width is allowed.
Referring to Figure 29A, there is shown a spine-diagram 2900 that, when processed, generates a prism geometry. A wireframe rendering 2901 and a shaded view 2902 of the output is shown.
Referring to Figure 29B, there is shown a spine-diagram 2903 that, when processed, generates a pyramid geometry. A wireframe rendering 2904 and a shaded view 2905 of the output is shown. This assumes that zero width is allowed.
Referring to Figure 29C, there is shown a spine-diagram 2906 that, when processed, generates a cube geometry. A wireframe rendering 2907 and a shaded view 2908 of the output is shown.
Referring to Figure 29D, there is shown a spine-diagram 2909 that, when processed, generates a bent tube geometry. A wireframe rendering 2910 and a shaded view 2911 of the output is shown.
Referring to Figure 30A, there is shown a spine-diagram 3000 that, when processed, generates a cuboid geometry. A wireframe rendering 3001 and a shaded view 3002 of the output is shown.
Referring to Figure 30B, there is shown a spine-diagram 3003 that, when processed, generates a cuboid geometry with irregular cross-section widths. The cuboid is more pointed at one end owing to the nature of the silhouette-outline-path 3004 and width-line 3005 provided in the spine-diagram 3003. A wireframe rendering 3006 and a shaded view 3007 of the output is shown.
Referring to Figure 31A, there is shown a spine-diagram 3100 that, when processed, generates a banana-shaped geometry. A wireframe rendering 3101 and a shaded view 3102 of the output is shown. Figure 31A illustrates a way of specifying a banana's silhouette-outline-path 3103 and width-line 3104 using curved strokes.
Referring to Figure 31B, there is shown a spine-diagram 3105 that, when processed, generates a shorter, thicker banana geometry than that shown in Figure 31A, due to the relatively shorter spine-path. A wireframe rendering 3106 and a shaded view 3107 of the output is shown. The silhouette-outlines in spine-diagrams 3100 and 3105, though different, produce approximately the same width-at-segments data corresponding to that of a banana's shape.
Referring to Figure 32A, there is shown a spine-diagram 3200 that, when processed, generates a torus geometry with a circular cross-section. A wireframe rendering 3201 and a shaded view 3202 of the output is shown.
Referring to Figure 32B, there is shown a spine-diagram 3203 that, when processed, generates a torus geometry with cross-sections of irregular widths. A wireframe rendering 3204 and a shaded view 3205 of the output is shown.
Referring to Figure 32C, there is shown a spine-diagram 3206 that, when processed, generates a torus geometry with circular cross-sections, but where the cross-sectional widths correspond to those of a banana. A wireframe rendering 3207 and a shaded view 3208 of the output is shown.
Referring to Figure 33A, there is shown a spine-diagram 3300 that, when processed, generates a vase geometry with a circular cross-section. A wireframe rendering 3301 and a shaded view 3302 of the output is shown.
Referring to Figure 33B, there is shown a spine-diagram 3303 that, when processed, generates a vase geometry with a circular cross-section at its top, and a star-shaped cross-section at its bottom, with smoothly interpolated cross-sections in between. A wireframe rendering 3304 and a shaded view 3305 of the output is shown.
Referring to Figure 34A, there is shown a spine-diagram 3400 that, when processed, generates a cuboid geometry. The horizontal spine-path runs from right to left, with a uniform square cross-section. A wireframe rendering 3401 and a shaded view 3402 of the output is shown.
Referring to Figure 34B, there is shown a spine-diagram 3403 that, when processed, generates a cuboid geometry. The horizontal spine-path runs from left to right, with a uniform square cross-section. A wireframe rendering 3404 and a shaded view 3405 of the output is shown.
Referring to Figure 34C, there is shown a spine-diagram 3406 that, when processed, generates a cuboid geometry with cross-sectional width increasing from top to bottom. The vertical spine-path runs from top to bottom, with a square cross-section. The presence of the width-line (top to bottom) over the specified silhouette-outline-path results in increasing width-at-segment-points along the length of the width-line. Since the spine-path takes on the width-at-segment-points in the direction of the width-line, the width-at-segment-points increase from the start of the spine-path to the end of the spine-path (top to bottom). A wireframe rendering 3407 and a shaded view 3408 of the output is shown.
Referring to Figure 34D, there is shown a spine-diagram 3409 that, when processed, generates a cuboid geometry with cross-sectional width decreasing from top to bottom. The vertical spine-path runs from top to bottom, with a square cross-section. The presence of the width-line (bottom to top) over the specified silhouette-outline-path results in decreasing width-at-segment-points along the length of the width-line because of the direction of the width-line. Since the spine-path takes on the width-at-segment-points in the direction of the width-line, the width-at-segment-points decrease from the start of the spine-path to the end of the spine-path (top to bottom). A wireframe rendering 3410 and a shaded view 3411 of the output is shown.
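The effect of width-line direction described in Figures 34C and 34D can be illustrated with a small sketch. The helper name `widths_along` and the list representation of sampled widths are illustrative assumptions, not part of the described apparatus:

```python
def widths_along(width_line_samples, reverse=False):
    """Return width-at-segment-point values in the travel direction of the
    width-line; reversing the width-line reverses the assignment order."""
    samples = list(width_line_samples)
    return samples[::-1] if reverse else samples

# Widths sampled against the silhouette-outline-path, top to bottom.
samples = [10.0, 15.0, 20.0, 25.0, 30.0]

top_to_bottom = widths_along(samples)          # width-line drawn top to bottom
bottom_to_top = widths_along(samples, True)    # width-line drawn bottom to top
```

With the same silhouette-outline-path, the first case yields widths that increase along the spine-path (as in Figure 34C) and the second yields widths that decrease (as in Figure 34D).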
Referring to Figures 35A to 35F, there are shown examples of possible variations in spine-diagram representations. All six spine-diagrams in Figures 35A to 35F generate the same cylinder-shaped three-dimensional geometry.
Figure 35A shows a spine-diagram 3500 including a spine-path 3501 starting at 3502 and ending at 3503, a spine-range-selector-path 3504 enclosing the spine-path 3501, an association-line 3505 and a slice-path 3506. The spine-range-selector-path 3504 intersects itself. This may not be problematic in terms of being able to recognise and use the spine-range-selector-path 3504 as the spine-range-selector-path 3504 still encloses the spine-path 3501. In cases where the start-point 3502 and end-point 3503 of a spine-path 3501 cannot be determined, for example in a non-SVG image input, the input parser module 701 and/or geometry generator module 702 may be configured to choose a start-point based on one or more user-specified factors. An example of a user-specified factor is the relative x-axis value and/or y-axis value.
Figure 35B shows a spine-diagram 3507 including a spine-path 3508 that starts at 3509 and ends at 3510. A spine-range-selector-path 3511 intersects the spine-path 3508 at two points, A and B, and does not completely enclose the spine-path 3508. A slice-path 3512 is associated with the spine-range-selector-path 3511 by an association-line 3513. In this scenario, the slice-path 3512 is applicable in the range of the spine-path 3508 between A and B. However, since there are no other slice-paths available, the same slice-path 3512 may be applied throughout the spine-path 3508, thus producing the cylindrical three-dimensional geometry.
Figure 35C shows a spine-diagram 3514 including a spine-path 3515 that starts at 3516 and ends at 3517. In this spine-diagram 3514, a start-marker (dot) is present at 3516 and an end-marker (arrow) present at 3517, thus helping easy visual identification of the start-point and end-point of the spine-path 3515. A spine-range-selector-path 3518, association-line 3519 and slice-path 3520 are also shown.
Figure 35D shows a spine-diagram 3521 where the spine-range-selector-path 3522 is neither a closed path nor self-intersecting. However, since all possible points on the spine-path 3523 would still be enclosed by the spine-range-selector-path 3522, primitive identification is possible and the spine-range-selector-path 3522 and spine-path 3523 may nevertheless be considered to be valid and associated with each other. The longer route taken by the association-line 3523, intersecting the spine-range-selector-path 3522 and a slice-path 3524, might not affect its validity.
Figure 35E shows a spine-diagram 3525 including a spine-path 3526 from 3527 to 3528. The spine-path 3526 has a start-marker (dot) but does not have an end-marker.
The spine-diagram 3525 may still be considered to be valid. The input parser module 701 may still be able to identify that the spine-path 3526 starts at 3527 and ends at 3528.
Figure 35F shows a spine-diagram 3529 where a line-segment 3530 is identified to be a spine-path based at least in part on it being a different colour from that of the other input entities in the spine-diagram 3529. If the input parser module 701 is configured to identify that line-segments of that specific colour correspond to spine-paths, then spine-diagram 3529 may be a valid spine-diagram. Note that, in contrast, in Figures 35A to 35E, a line-segment is identified as a spine-path based on its higher relative thickness.
Referring to Figures 36A to 36F, there are shown examples of potentially invalid spine-diagrams.
Figure 36A shows a spine-diagram 3600 that does not include a slice-path. Spine-diagram 3600 includes a spine-path 3601 starting at 3602 and ending at 3603. A spine-range-selector-path 3604 completely encloses the spine-path 3601. However, there is no slice-path associated with the spine-range-selector-path 3604. The input parser module 701 may be configured to consider the spine-diagram 3600 as invalid at least in part on this basis.
Figure 36B shows a spine-diagram 3605 in which a slice-path is not correctly associated with a spine-range-selector-path. Spine-diagram 3605 includes a spine-path 3606 starting at 3607 and ending at 3608. A spine-range-selector-path 3609 encloses a range of the spine-path 3606 and has an association-line 3610. However, the association-line 3610 does not intersect with any other stroke or closed path. The presumably intended slice-path 3611 does not intersect with the association-line 3610. Therefore the slice-path 3611 might not be associated with the spine-range-selector-path 3609 by the input parser module 701. The input parser module 701 may be configured to consider the spine-diagram 3605 as invalid at least in part on this basis.
Figure 36C shows a spine-diagram 3612 in which a spine-range-selector-path is not correctly associated with a slice-path. Spine-diagram 3612 includes a spine-path 3613 starting at 3614 with a start-marker (dot) and ending at 3615 with an end-marker (arrow). The spine-range-selector-paths are 3616 and 3617. The spine-range-selector-path 3617 is associated with a slice-path 3618 by means of an association-line 3619.
However, the spine-range-selector-path 3616 does not have any associated slice-path. The input parser module 701 may be configured to consider the spine-diagram 3612 as invalid at least in part on this basis.
Figure 36D shows a spine-diagram 3620 that does not include a slice-path. Spine-diagram 3620 includes a spine-path 3621 starting at 3622 with a start-marker (dot) and ending at 3623 with an end-marker (arrow). A spine-range-selector-path 3624 completely encloses the spine-path 3621. By means of an association-line 3625, the spine-range-selector-path 3624 is associated with a silhouette-outline-path 3626 and a width-line 3627. However, the spine-range-selector-path 3624 does not have any associated slice-path. The input parser module 701 may be configured to consider the spine-diagram 3620 as invalid at least in part on this basis.
Figure 36E shows a spine-diagram 3628 in which an association-line intersects a spine-path. The spine-diagram 3628 includes a spine-path 3629 starting at 3630 with a start-marker (dot) and ending at 3631. A spine-range-selector-path 3632 completely encloses the spine-path 3629. An intended slice-path is 3633. An intended association-line 3634 intersects the slice-path 3633 and the spine-range-selector-path 3632, but also intersects the spine-path 3629. The input parser module 701 may be configured to make one or more assumptions regarding a spine-diagram in order to identify the primitive type of an input entity. For example, an assumption may be that an association-line does not intersect with a spine-path. So in this case, the input parser module might not be able to identify line-segment 3634 as an association-line. The input parser module 701 might therefore fail to associate the slice-path 3633 with the spine-range-selector-path 3632. The input parser module 701 may be configured to consider the spine-diagram 3628 as invalid at least in part on this basis.
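The assumption that an association-line does not intersect a spine-path can be checked with a standard two-dimensional segment-intersection test. The following sketch is illustrative; the helper names are assumptions, and collinear touching cases are deliberately ignored:

```python
def _orient(p, q, r):
    """Sign of the cross product (q - p) x (r - p)."""
    v = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (v > 0) - (v < 0)

def segments_intersect(a, b, c, d):
    """True if segment a-b properly crosses segment c-d (collinear
    touching cases are ignored in this sketch)."""
    return (_orient(a, b, c) != _orient(a, b, d) and
            _orient(c, d, a) != _orient(c, d, b))

# A candidate association-line that crosses the spine-path is rejected,
# as in the Figure 36E scenario.
spine = ((0.0, 0.0), (0.0, 10.0))
candidate = ((-5.0, 5.0), (5.0, 5.0))
is_association_line = not segments_intersect(candidate[0], candidate[1], *spine)
```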
Figure 36F shows a spine-diagram 3635 in which a spine-path is not distinguishable from other input entities. Spine-diagram 3635 includes four line-segments 3636, 3637, 3638, 3639 that are of the same colour. The input parser module 701 may be configured to identify a primitive type for at least one of the line-segments 3636, 3637, 3638, 3639 in order to deduce the primitive types of some or all of the other line-segments 3636, 3637, 3638, 3639. The input parser module 701 may be configured to try to identify a spine-path first, for example based on its relative thickness and/or colour compared to that of other input entities. In this example, there is no distinguishing feature between the intended spine-path (most likely line-segment 3636) and the other line-segments 3637, 3638, 3639. The input parser module 701 may be configured to consider the spine-diagram 3635 as invalid at least in part on this basis.
Referring to Figures 37A to 37C, there is shown an example of input data comprising a spine-diagram with a plurality of spine-paths.
Figure 37A shows input data comprising a spine-diagram, indicated with reference sign 3700, where two spine-paths 3701, 3702 share a common slice-path 3703 and silhouette-outline-path 3704. When processed, this input data generates two three-dimensional geometrical shapes: a cylinder and a curved, tubular shape as shown in Figures 37B and 37C. The cylinder is shown in wireframe view 3705 and shaded view 3706. The curved, tubular shape is shown in wireframe view 3707 and shaded view 3708.
The input data contains two spine-paths 3701, 3702 with respective spine-range-selector-paths 3709, 3710. Both spine-range-selector-paths 3709, 3710 are associated with the slice-path 3703 using respective association-lines 3711, 3712. Both spine-range-selector-paths 3709, 3710 are associated with the silhouette-outline-path 3704 using respective association-lines 3713, 3714. Line-segment 3715 is a width-line of the silhouette-outline-path 3704.
Referring to Figure 38A, there is shown an example of a spine-diagram comprising a displacement-map. The spine-diagram 3800 includes a spine-path 3801, a spine-range-selector-path 3802, an association-line 3803 and a slice-path 3804. There is an embedded image 3805 enclosed by a dashed line-segment 3806. The dashed line-segment 3806 is associated with the spine-range-selector-path 3802 by a further association-line 3807. There is a circular, shaded area 3808 in the image 3805. Intensity values of pixels in the image may provide values that are used as a displacement (depth) value for one or more vertices and/or one or more faces of the generated three-dimensional geometry. This affects the shape of the generated three-dimensional geometry. The input parser module 701 may for example be configured to categorise any embedded image that is enclosed in a dashed line-segment as a displacement-map.
Referring to Figure 38B, there is shown a shaded view 3809 of the three-dimensional geometry 3810 generated from the spine-diagram 3800. The three-dimensional geometry 3810 has vertices in a circular region 3811 of the generated geometry 3810 that protrude outside the geometry 3810. Depending on the pixel values of the image 3805, a negative depth may also be achieved. One among a variety of mapping techniques, for example UV-unwrapping, could be used to map pixels in the image 3805 to corresponding locations in the geometry 3810.
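The displacement-map behaviour described above can be sketched as follows, assuming vertices already carry normals and UV coordinates. The function name and the nested-list image representation are illustrative assumptions, not the described apparatus:

```python
def apply_displacement(vertices, normals, uvs, image, scale=1.0):
    """Displace each vertex along its normal by the intensity of the image
    pixel its UV coordinate maps to (pixel 0..255 scaled to 0..scale)."""
    h, w = len(image), len(image[0])
    displaced = []
    for (x, y, z), (nx, ny, nz), (u, v) in zip(vertices, normals, uvs):
        row = min(round(v * (h - 1)), h - 1)
        col = min(round(u * (w - 1)), w - 1)
        d = scale * image[row][col] / 255.0
        displaced.append((x + nx * d, y + ny * d, z + nz * d))
    return displaced

# A 2x2 grey-scale "image" with one bright pixel: only the vertex whose
# UV coordinate maps to that pixel is pushed out along its normal.
image = [[0, 0],
         [0, 255]]
verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
norms = [(0.0, 0.0, 1.0), (0.0, 0.0, 1.0)]
uvs = [(0.0, 0.0), (1.0, 1.0)]
displaced = apply_displacement(verts, norms, uvs, image, scale=2.0)
```

A negative `scale`, or signed pixel values, would correspondingly produce the negative depth mentioned above.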
Referring to Figure 39A, there is shown an example of a spine-diagram comprising a hole-path. The spine-diagram 3900 includes a spine-path 3901, a spine-range-selector-path 3902, an association-line 3903 and a slice-path 3904. The spine-diagram 3900 also includes a line-segment 3905 representing a hole-path primitive. The input parser module 701 may be configured to categorise line-segment 3905 as a hole-path primitive based at least in part on a colour of the line-segment 3905, for example.
Referring to Figure 39B, there is shown the three-dimensional geometrical shape 3906 generated by the spine-diagram 3900. The shape 3906 is a non-spine Boolean geometry created by subtracting a small cylindrical shape 3907 from a larger cylindrical shape 3908 of the same height.
Referring to Figure 40A, there is shown an example of a spine-diagram comprising a hole-map. A hole-map may be an image that helps create one or more holes in one or more regions on the surface of the generated three-dimensional geometry. This may be achieved by removing or moving one or more relevant vertices and/or one or more relevant faces on the surface of the geometry corresponding to the area marked in the hole-map image. The depth of the hole(s) may be controlled by the pixel values of the embedded image.
The spine-diagram 4000 includes a spine-path 4001, a spine-range-selector-path 4002, an association-line 4003 and a slice-path 4004. There is an embedded image 4005 enclosed by a dotted line-segment 4006. The dotted line-segment 4006 is associated with the spine-range-selector-path 4002 by a further association-line 4007. There is a circular line-segment 4008 in the image 4005. The colour of the line-segment 4008 in the image 4005 may provide a value that is used to calculate a location of the vertices to be removed or moved in the generated three-dimensional geometry. The input parser module 701 may be configured to categorise any embedded image and/or SVG data that is enclosed in a dotted stroke path as a hole-map. An SVG stroke equivalent may be used in place of the embedded image 4005. Note that a hole-path (described above) creates one or more holes in the direction of a spine-path, whereas a hole-map creates one or more holes in any selected region of the generated geometry.
Referring to Figure 40B, there is shown a shaded view 4008 of the three-dimensional geometry 4009 generated using the spine-diagram 4000. The three-dimensional geometry 4009 has a circular-shaped hole 4010 in a region of the generated geometry. One among a variety of mapping techniques, for example UV-unwrapping, could be chosen to map pixels in the image 4005 to corresponding locations in the geometry 4009.
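A hole-map, as described above, can be approximated by dropping the faces whose mapped pixels are marked. The sketch below is a simplified assumption (a binary mask rather than depth-controlling pixel values, and faces removed rather than moved); the helper name `cut_holes` is illustrative:

```python
def cut_holes(faces, face_uvs, hole_mask):
    """Drop every face whose UV centre maps to a marked pixel of the
    hole-mask (1 = hole, 0 = keep)."""
    h, w = len(hole_mask), len(hole_mask[0])
    kept = []
    for face, uvs in zip(faces, face_uvs):
        cu = sum(u for u, _ in uvs) / len(uvs)
        cv = sum(v for _, v in uvs) / len(uvs)
        row = min(round(cv * (h - 1)), h - 1)
        col = min(round(cu * (w - 1)), w - 1)
        if hole_mask[row][col] == 0:
            kept.append(face)
    return kept

# One marked pixel in a 2x2 mask removes the face mapped onto it.
mask = [[0, 1],
        [0, 0]]
faces = [(0, 1, 2), (2, 3, 0)]
face_uvs = [[(0.0, 0.0), (0.1, 0.0), (0.0, 0.1)],   # maps to an unmarked pixel
            [(0.9, 0.0), (1.0, 0.0), (1.0, 0.1)]]   # maps to the marked pixel
kept = cut_holes(faces, face_uvs, mask)
```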
Referring to Figure 41A, there is shown an example of a spine-diagram comprising a marker-map. A marker-map may be an image that defines the location of one or more marker-points relative to the generated three-dimensional geometry.
The spine-diagram 4100 includes a spine-path 4101, a spine-range-selector-path 4102, an association-line 4103 and a slice-path 4104. There is an embedded image 4105 enclosed by a dotted-and-dashed line-segment 4106. The dotted-and-dashed line-segment 4106 is associated with the spine-range-selector-path 4102 by a further association-line 4107. There are three square-shaped regions of various colours in the image 4105. Each pixel in the image 4105 may correspond to a location on the surface of the generated three-dimensional geometry. The colours of the squares in the image 4105 may provide a value that is used to calculate a depth of the location of an associated marker-point from the surface of the generated three-dimensional geometry. In this example, the input parser module 701 is configured to categorise any embedded image and/or SVG data that is enclosed in a dotted-and-dashed stroke path as a marker-map. An SVG stroke equivalent may be used in place of the embedded image 4105. Note that in Figures 38A, 40A and 41A, the enclosing paths have different stroke-types (dashed, dotted, and dotted-and-dashed respectively) to enable identification of the corresponding map type. The same may be achieved for example by specifying different colours for these enclosing strokes, provided the input parser module 701 is configured accordingly.
Figure 41B shows the positions of the generated marker-points in the generated three-dimensional geometry 4108. The squares 4109, 4110, 4111 in the marker-map shown in Figure 41A correspond to the locations of the marker-points 4112, 4113, 4114 in Figure 41B respectively. The name-string “MARKER.k1” 4115 in the marker-map is recognised by the input parser module 701. The name 'k1' may be assigned to the marker-point 4114 corresponding to the square 4111 nearest to the name-string 4115. In this example, marker-point 4112 is deeper inside the circular surface of the cylinder 4108 as a result of the relative colour difference of square 4109 compared to squares 4110 and 4111.
Figure 42A shows a spine-diagram 4200 with a spine-path 4201 starting at 4202 and ending at 4203. The spine-path is enclosed by a spine-range-selector-path 4204. A first association-line 4205 associates the spine-range-selector-path 4204 to the circular slice-path 4206. A second association-line 4207 associates the spine range selector 4204 with the stroke 4208. The stroke 4208 in turn encloses a z-path stroke 4209. The z-path stroke 4209 starts at 4210, as indicated by a start-marker in the form of a dark square, and ends at 4211, as indicated by an end-marker in the form of an empty diamond. The input parser module 701 may be configured such that any stroke with a square start-marker and diamond end-marker should be considered as a z-path. Other mechanisms for recognising z-paths may be used. For example, a z-path may be recognised if it is a given colour. Data extracted from the z-path is used to produce displacement in the third dimension.
Figure 42B shows an example of a method of extracting z-values at different points along the z-path 4209. A horizontal line A-B is drawn through the start-point 4210 of the z-path. The horizontal line is clipped from the start-point 4210 to the end-point 4211 to coincide with the z-path projection on the line. In this example, the line-segment from 4210 to 4211 is divided into four z-path segments, creating five equidistant z-path segment points. The number of z-path segments of the clipped line may be the same as that of a corresponding spine-path. A vertical construction line is drawn at each of the z-path segment points. The distance (z-value) between the z-path segment point and the point at which the corresponding vertical construction line intersects the z-path stroke is calculated for each z-path segment point. In this illustration, the z-value calculated for the first and fifth z-path segment points is approximately zero. The second, third and fourth z-path segment points have z-values represented by the lengths of the line-segments 4212, 4213, 4214 respectively. After calculating these z-values, the z-values are applied as displacement values to the corresponding spine-path segment points, creating a three-dimensional spine-path. During the remaining steps in geometry creation, slice planes may be made perpendicular to the now three-dimensional spine-path.
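The z-value extraction of Figure 42B can be sketched as follows, representing the z-path stroke as a polyline and sampling it at equidistant x positions along the baseline. The helper name `z_values` and the polyline representation are assumptions:

```python
def z_values(z_path, n_segments):
    """Sample the vertical offset of a z-path polyline from the horizontal
    baseline through its start-point, at n_segments + 1 equidistant x
    positions between the start-point and the end-point."""
    x0, y0 = z_path[0]
    x1 = z_path[-1][0]
    values = []
    for i in range(n_segments + 1):
        x = x0 + (x1 - x0) * i / n_segments
        # Find the polyline span containing x and interpolate its y there
        # (the "vertical construction line" of Figure 42B).
        for (ax, ay), (bx, by) in zip(z_path, z_path[1:]):
            if min(ax, bx) <= x <= max(ax, bx):
                t = 0.0 if bx == ax else (x - ax) / (bx - ax)
                values.append((ay + t * (by - ay)) - y0)
                break
    return values

# A triangular z-path: flat at both ends, peaking halfway along.
path = [(0.0, 0.0), (50.0, 20.0), (100.0, 0.0)]
zs = z_values(path, 4)   # sampled at x = 0, 25, 50, 75, 100
```

Applying the resulting values as displacements to the corresponding spine-path segment points, as described above, lifts the spine-path into the third dimension.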
Figure 42C shows a shaded three-dimensional scene view of a helix-shaped geometry 4215 generated from the spine-diagram 4200. The slice planes may be perpendicular to the spine-path at each spine-path segment point, although they are not necessarily so in this illustration. By default, the z-value of the first segment point 4210 may be considered to be zero. Alternatively, an offset value may be specified by a user. A scale factor of the z-path may be specified by the user. If so, the z-values are multiplied by this scale factor.
Figure 43 shows input data comprising a plurality of paths 4300, 4301, 4302, 4303, 4304, 4305 and a plurality of text strings 4306, 4307, 4308, 4309, 4310, 4311. The input data may represent a block of SVG input 4312, or the contents of an SVG file in which there are multiple paths. In this example, most of the paths have start-markers and end-markers, and all of the paths have embedded text strings placed nearby, with the common prefix being "PATH".
The input parser module 701 may be configured to process such input data, assign the name attribute extracted from each text string to the nearest path and generate a data-structure including corresponding data. For example, the name assignment could be based on the proximity of the start-point of a path to the start-point of the embedded text string. Such paths, after conversion to the appropriate data-structure, may be referenced and used in the three-dimensional workspace, for example via rule execution and/or by the scripting module 206. For example, a named-path may be referenced and used as a slice-path, or a width-line for a spine.
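The proximity-based name assignment could be sketched along these lines. The representation of paths by their start-points and the helper name `assign_names` are assumptions for illustration:

```python
import math

def assign_names(path_starts, text_strings):
    """Map the name part of each "PATH.<name>" text string to the index of
    the path whose start-point is nearest the string's anchor point."""
    assignment = {}
    for text, anchor in text_strings:
        _, name = text.split(".", 1)
        nearest = min(range(len(path_starts)),
                      key=lambda i: math.dist(path_starts[i], anchor))
        assignment[name] = nearest
    return assignment

starts = [(0.0, 0.0), (100.0, 0.0)]            # start-points of two paths
labels = [("PATH.left", (5.0, 5.0)),           # text string and its anchor point
          ("PATH.right", (90.0, -5.0))]
assignment = assign_names(starts, labels)
```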
Figure 44A shows a spine-diagram 4400 with named entities and rules. The text strings “SPINE.myobject” 4401, “SLICE.mycircle” 4402, “SLICE.square2” 4403 and “SILHOUETTE.outline1” 4404 are embedded in the input data such that they are relatively close to the spine-path 4405, slice-path 4406, slice-path 4407 and silhouette-outline-path 4408 respectively. The input parser module 701, on encountering embedded text, may be configured to determine the category of the primitive to which the text string should be assigned, find the nearest primitive matching the type and assign the name to the primitive. For example, the text string “SPINE.myobject” implies that the name “myobject” should be assigned to the spine corresponding to the nearest spine-path; in this example, spine-path 4405. Further, there are two text strings with the prefix “SLICE”, and there are two slice-paths 4406, 4407 in the spine-diagram 4400. The name “mycircle” is assigned to slice-path 4406 because the centre of the bounding box of slice-path 4406 is closer to the string 4402 than is the centre of the bounding box of slice-path 4407. The spine-diagram 4400 also contains the strings 4409, 4410, 4411: RULES: myobject:setSegments[50]; myobject:setLength[210];
The input parser module 701 is configured to process these strings 4409, 4410, 4411 by identifying the prefix "RULES" 4409 and inserting the rule-strings 4410, 4411 in an output data-structure described above. These rules or computer-readable instructions may be executed by the scripting module 206, which results in changes in the generated geometry.
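The rule-strings shown above follow a target:operation[arguments] pattern, which might be parsed along these lines. The regular expression and the tuple representation of rules are illustrative assumptions, not the described data-structure:

```python
import re

# target:operation[arguments] — e.g. "myobject:setSegments[50];"
RULE_RE = re.compile(r"(\w+):(\w+)\[([^\]]*)\]")

def parse_rules(strings):
    """Collect (target, operation, arguments) triples from the strings
    that follow a "RULES" prefix string."""
    rules, in_rules = [], False
    for s in strings:
        if s.strip().rstrip(":").upper() == "RULES":
            in_rules = True
            continue
        if in_rules:
            for target, op, args in RULE_RE.findall(s):
                rules.append((target, op, [int(a) for a in args.split(",") if a]))
    return rules

rules = parse_rules(["RULES:", "myobject:setSegments[50];", "myobject:setLength[210];"])
```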
Figure 44B shows a wireframe view 4412 of the generated geometry and a shaded view 4413 of the generated geometry in a three-dimensional scene containing a reference grid and X, Y and Z axes. The execution of the rules 4410 and 4411 by the post-processor module 703 has resulted in the generated geometry having 50 segments in the spine-path, with the length of the spine-path set to 210 units, thereby influencing the shape and resolution of the generated geometry shown in 4412 and 4413.
The post-processor module 703 may be configured to perform one or more postprocessing operations in relation to the three-dimensional geometry and/or three-dimensional model. The post-processor module 703 may be configured to check the validity of input data. The post-processor module 703 may be configured to check the validity of the data output by the input parser module 701.
Referring to Figure 45, there is shown an example of a workflow of an example post-processor module 703.
At 4500, the post-processor module 703 executes one or more computer-readable instructions, rules, declarations and/or definitions if any are provided. The one or more computer-readable instructions may be provided by a user. In some examples, a computer-readable instruction, on execution, can create and/or modify the shape, orientation and/or one or more other attributes of the three-dimensional geometries generated by the geometry generator module 702.
At 4501, the post-processor module 703 calculates the location and/or one or more other attributes of one or more marker-objects. Marker-objects are described in more detail below. An example of a post-processing operation is calculating a position and/or other associated data of a marker-point. Another example of a post-processing operation is calculating a position and/or other associated data of a marker-vector. Another example of a post-processing operation is calculating a position and/or associated data of a marker-plane. Another example of a post-processing operation is calculating a position and/or associated data of a marker-polygon.
At 4502, the post-processor module 703 applies any user-requested modification on the generated geometry and/or data-structures.
At 4503, the post-processor module 703 stores any generated spines and/or non-spines in a designated storage area and/or displays any generated spines and/or non-spines in the three-dimensional workspace. Any generated spines and non-spines may be stored as files in a designated file storage area, for example in the local computing device or in a network storage device. If a three-dimensional workspace and display device are available, the spines and/or non-spines may be added to the workspace 210 and their generated three-dimensional geometry rendered in a three-dimensional view.
One or more marker-objects may be defined and used to facilitate assembly of a three-dimensional geometry and/or control its position and/or orientation in the three-dimensional workspace, for example with respect to other three-dimensional entities in the three-dimensional workspace.
Examples of marker-objects include, but are not limited to marker-points, marker-vectors, marker-planes and marker-polygons. A marker-point is a point in three-dimensional space, defined by one or more parameters. A marker-vector is a vector connecting two or more marker-points in three-dimensional space. A marker-plane is a plane in the three-dimensional workspace 209 constructed using at least three marker-points in the three-dimensional space. A marker-polygon is a polygon constructed using three or more marker-points in three-dimensional space. A marker-object may be assigned as a child object to a spine, non-spine and/or the three-dimensional workspace. In such cases, it may be referred to as a spine-marker, non-spine marker and scene-marker respectively. A marker-object may be defined by a combination of parameters with non-static values. Its location in three-dimensional space may be evaluated whenever a parent spine, non-spine or scene is modified. Another example of a modification that may result in evaluation of the location in three-dimensional space is a position modification. Another example of a modification that may result in evaluation of the location in three-dimensional space is an orientation modification. Another example of a modification that may result in evaluation of the location in three-dimensional space is a scale modification. Another example of a modification that may result in evaluation of the location in three-dimensional space is a translation modification. Another example of a modification that may result in evaluation of the location in three-dimensional space is a rotation modification.
It may be possible to create, identify, reference and/or use one or more specific locations in three-dimensional space. Such a specific location is referred to herein as a ‘marker-point’. A marker-point may be assigned a uniquely addressable name. A marker-point may be assigned an expression, which, when evaluated, provides a point in three-dimensional space with X, Y and Z coordinates. The expression may for example be embedded in input data. The expression may for example be provided by one or more computer-readable instructions. The expression may for example be provided by manually locating the point when navigating in three-dimensional space. A marker-point may have one or more marker-point properties. An example of a marker-point property is a name. The name may be in the form of a name string. The name string may have a parent name embedded as a sub-string. Another example of a marker-point property is a parent name. A parent name may be implied by the value of the name property. Another example of a marker-point property is an expression. A reference baseline may be used when evaluating one or more parameters of a marker-point. The reference baseline may, for example, be a horizontal line that passes through the spine-point, centroid or centre-point of the slice-path. A marker-point may be expressed by a combination of one or more of the following parameters. ‘s’: a measure (for example a ratio) of a distance along a spine-path relative to the total length of the spine-path. ‘a’: a measure (for example an angle) of radiation of a line that lies on a plane formed by a scale-transformed slice at the point defined by the parameter 's'; the plane being perpendicular to the spine-path at the point defined by the parameter 's'; the line makes an angle ‘a’ with respect to the reference baseline.
‘r’: a measure (for example a ratio) of the distance from the spine-point at 's' to a point of intersection with a slice-path, for a line radiating from point 's' and making an angle 'a' with the reference baseline, perpendicular to the direction of the spine-path. ‘p’: a measure (for example an angle) of rotation about a line that passes through the spine-point and lies on the plane formed by the boundary points. ‘d’: an additional measure (for example a distance) to be added to a calculated point along the direction of a line that is perpendicular to a slice plane. ‘m’: spine-marker. ‘k’: slice-marker. ‘h’: silhouette-marker. ‘c’: centre point through which the reference baseline passes.
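The evaluation of the 's', 'a' and 'r' parameters can be illustrated with a minimal sketch. For simplicity only, it assumes a straight spine-path along the Z axis and a circular slice-path of constant radius centred on the spine-point, so the reference baseline is the X axis; the function name and default values are illustrative and not part of the described apparatus, whose spine-paths and slice-paths may be arbitrary curves.

```python
import math

def evaluate_marker_point(s, a, r, spine_length=100.0, slice_radius=10.0):
    """Evaluate a marker-point expressed by the parameters 's', 'a' and 'r'.

    Assumes (for illustration) a straight spine along the Z axis and a
    circular slice-path, so the spine-point at 's' is (0, 0, z) and the
    reference baseline is the X axis.
    """
    # 's' is a percentage of the distance along the spine-path.
    z = (s / 100.0) * spine_length
    # The line radiating at angle 'a' from the baseline meets the circular
    # slice-path at distance slice_radius from the spine-point (r = 100).
    distance = (r / 100.0) * slice_radius
    x = distance * math.cos(math.radians(a))
    y = distance * math.sin(math.radians(a))
    return (x, y, z)
```

For example, a point with s=0, a=0, r=100 evaluates to the intersection of the baseline with the slice-path at the start of the spine.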
Another way of defining a location of a marker-point is to provide additional parameters that contain X, Y and Z coordinate displacement values. These displacement values may then be added to, or subtracted from, the marker-point value calculated by its other parameters. Another way of defining a location of a marker-point is a static marker-point. A static marker-point defines a point in three-dimensional space directly by its X, Y and Z coordinate values. Another way of defining a location of a marker-point is a marker-map. A marker-map may comprise image, SVG and/or matrix data that defines the location of various points on or inside the three-dimensional geometry. Users of the system may manually create and move an arbitrary point in three-dimensional space and assign it to a spine or non-spine. This point may be used as a static marker-point and/or may be used to calculate a combination of the above parameters in which the point can be expressed to arrive at the same three-dimensional location. Another way of defining a location of a marker-point is manual creation of a point in three-dimensional space, for example using a GUI.
Figures 46A to 50D illustrate how a point (referred to as a ‘marker-point’) in three-dimensional space can be defined in terms of one or more parameters, based on the relative position of the point to the generated three-dimensional geometry.
Figures 46A to 46G show how points in three-dimensional space may be expressed and/or defined in terms of the parameters 's', 'a' and 'r'.
Figure 46A shows a spine-diagram 4600.
Figure 46B shows that the value of the parameter 's' is ‘0’ at the start-point of a spine-path 4601 and ‘100’ at the end-point of the spine-path 4601. The value of 's' may be expressed as a percentage of the length of the spine-path 4601 from the start-point of the spine-path 4601 to the specific point in question on the spine-path 4601.
Figures 46C to 46E show a top-view of the cross-section (the slice-path 4602), and how the parameter values of 'r' and 'a' may be determined. In the circular cross-section, the point 'O' represents the spine-point. In this example, the spine-point is the centroid of the slice-path 4602. A line-segment OB is drawn at an angle of 'a' degrees to the dashed horizontal line 4603 that passes through the spine-point. The point of intersection of OB with the slice-path 4602 is noted as point A. Point 'C' is located on the line-segment OB. Its 'r' parameter value is calculated as the percentage ratio of OC to OA.
In Figure 46C, the 'a' value of point C is approximately 20 degrees. The 'r' value of point C is calculated as distance(OC)*100/distance(OA). Here, distance(OC) is equal to distance(OA). Therefore r=100 for point C.
In Figure 46D, the 'a' value of point C is approximately 20 degrees. The 'r' value of point C is distance(OC)*100/distance(OA). Here, distance(OC) is half the distance(OA). Therefore, r=50 for point C.
In Figure 46E, the 'a' value of point C is 0 degrees. The 'r' value of point C is distance(OC)*100/distance(OA). Here, distance(OC) is half the distance(OA). Therefore r=50 for point C.
Figure 46F shows various points placed in their evaluated locations in three-dimensional space. It is assumed that the distance QX is 35% of the spine length, QY is 60% of the spine length, and QW is 100% of the spine length.
Figure 46F and Figure 46G show named points in three-dimensional space and the 'a', 'r' and 's' values used to define the location of those points.
Figures 47A to 47C show how points in three-dimensional space can be expressed and/or defined in terms of the parameters 's', 'a' and 'r', and also the parameters 'd' and 'p'. The three-dimensional geometries shown in Figures 47A and 47B are generated based on the spine-diagram 4600 shown in Figure 46A.
In Figure 47A, the marker-point 'S', whose expression is 'a30r100s0p40', is evaluated and its location in three-dimensional space is determined. The parameter 'p' indicates the angle of rotation about a line that passes through the spine-point 'O', lies on the cross-sectional plane, and is perpendicular to OR.
Figure 47B shows that the marker-point 'F', whose expression is 'a30r100s0d90', is evaluated and its location in three-dimensional space is determined. The parameter 'd' indicates the units of additional distance (FH) in the direction perpendicular to the cross-section.
Figure 47C shows various marker-point names and their associated expressions. The evaluated locations in three-dimensional space are shown in Figures 47A and 47B.
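Marker-point expressions of the kind shown in Figures 47A to 47C can be tokenised with a simple parser. The sketch below is illustrative only: it assumes each parameter is a single-letter key ('s', 'a', 'r', 'p', 'd', 'm', 'k', 'h') followed by either a numeric value or, for the marker parameters, a marker name; the exact syntax accepted by the apparatus may differ.

```python
import re

def parse_marker_expression(expr):
    """Split an expression such as 'a30r100s0p40' into parameter/value pairs.

    Numeric values are returned as floats; non-numeric values (marker names,
    e.g. the 'x' in 'kx') are returned as strings.
    """
    params = {}
    for key, value in re.findall(r"([sarpdmkh])(-?\d+(?:\.\d+)?|[A-Za-z]\w*)", expr):
        try:
            params[key] = float(value)
        except ValueError:
            params[key] = value  # a marker name rather than a number
    return params
```

For example, parsing 'a30r100s0p40' yields the angle, ratio, spine-distance and rotation parameters as separate entries, ready for evaluation.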
Figures 48A to 48C show how the location of a slice-marker may be defined and/or evaluated.
Figure 48A shows a spine-diagram 4800 with a spine-path 4801, a spine-range-selector-path 4802, an association-line 4803, a slice-path 4804, a square 4805 marking the relative location of a slice-marker within the slice-path 4804, and a name-string 4806 providing the name of the slice-marker as 'x'. The input parser module 701 may be configured to identify a filled square within a slice-path as a slice-marker primitive. The slice-marker 4805 is associated with the name 'x' because it is the nearest name-string that matches the type, based on the name-string prefix 'SLICEMARKER'.
Figure 48B shows how the named slice-marker and its location can be used to extract the value of the parameters 'a' and 'r'. In this example, the relative location of the slice-marker in the cross-section is B. The line-segment OA passing through B makes an angle of 30 degrees with the horizontal dotted line. Therefore the 'r' value of B is CB*100/CA, which is approximately 60 in this example, and the ‘a’ value is 30.
Figure 48C shows the location of the point 'S' in three-dimensional space. The point 'S', expressed by the term 'kx s0', is evaluated and is equivalent to the expression 'a30r60s0'.
Figure 48D shows the names and expressions of several marker-points and Figure 48C shows their evaluated locations in three-dimensional space.
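The extraction of 'a' and 'r' from a slice-marker location, as in Figure 48B, can be sketched as follows. For simplicity the sketch assumes a circular slice-path of known radius centred on the spine-point, so the intersection point with the slice-path lies at that radius along the marker's direction; a general slice-path would require an explicit path-intersection test.

```python
import math

def slice_marker_to_a_r(marker_xy, centroid_xy, slice_radius):
    """Recover the 'a' and 'r' parameter values from a slice-marker location.

    'a' is the angle of the line from the centroid to the marker with the
    horizontal baseline; 'r' is the percentage ratio of the marker's distance
    from the centroid to the distance of the slice-path intersection (here,
    simply the circle radius).
    """
    dx = marker_xy[0] - centroid_xy[0]
    dy = marker_xy[1] - centroid_xy[1]
    a = math.degrees(math.atan2(dy, dx))
    r = math.hypot(dx, dy) * 100.0 / slice_radius
    return a, r
```

A marker placed halfway between the centroid and the slice-path on the baseline thus yields a=0 and r=50, matching the worked example.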
Figures 49A to 49C show how the location of a spine-marker may be defined and/or evaluated.
Figure 49A shows a spine-diagram 4900 with a spine-path 4901, a spine-range-selector-path 4902, an association-line 4903, a slice-path 4904, a square 4905 marking the relative location of a spine-marker on the spine-path 4901, and a name-string 4906 providing the name of the spine-marker as 'y'. The input parser module 701 may be configured to identify any filled squares on a spine-path as a spine-marker primitive. The spine-marker is associated with the name 'y' because it is the nearest name-string that matches the type, based on the name-string prefix 'SPINEMARKER'. The distance of the spine-marker square 4905 from the start-point of the spine-path is approximately 70% of the total length of the spine-path. Thus, the spine-marker may be used to express an 's' parameter value in a more visual way.
Figure 49B shows the location of the points 'S', 'R' and 'U' in three-dimensional space. The parameter value of 'm' is 'y', which corresponds to an 's' parameter value of 70.
Figure 49C shows the names and expressions of various marker-points and Figure 49B shows their evaluated locations in three-dimensional space.
Figures 50A to 50D show how the location of a silhouette-marker may be defined and/or evaluated.
Figure 50A shows a spine-diagram 5000 with spine-path 5001, a spine-range-selector-path 5002, association-lines 5003, 5004, a slice-path 5005, a silhouette-outline-path 5006, a width-line 5007, a square 5008 marking the relative location of a silhouette-marker within the silhouette-outline-path 5006, and a name-string 5009 providing the name of the silhouette-marker as 'z'. The input parser module 701 may be configured to identify any filled squares within a silhouette-outline-path as a silhouette-marker primitive. The silhouette-marker is associated with the name 'z' because it is the nearest name-string that matches the type, based on the name-string prefix 'SILHOUETTEMARKER'.
Figure 50B shows how the relative location of the silhouette-marker 5008 (B) within the silhouette-outline-path 5006 and the width-line 5007 is used to extract the 'r' and 's' parameter values from the silhouette-marker (B). The ‘r’ value is distance(AB)*100/distance(AC). The 's' value is calculated as distance(DA)*100/distance(DE). Approximate values are r=50 and s=70.
Figure 50C shows the location of various points in three-dimensional space. The parameter value of 'h' is 'z', which corresponds to an 's' parameter value of 70 and an 'r' parameter value of 50.
Figure 50D shows the names and expressions of various marker-points and Figure 50C shows their evaluated locations in three-dimensional space. A marker-vector is a vector connecting two or more marker-points in three-dimensional space. A marker-vector may be defined by specifying two marker-points, the first defining the starting-point of the vector and the second defining the end-point of the vector. A marker-plane is a plane in the three-dimensional workspace constructed using at least three marker-points. The direction of the plane may depend on the order in which the constituent points are provided as input. For example, the apparatus could be configured such that it should expect the three input marker-points to be provided in an anti-clockwise direction, thus determining the direction of a plane normal. A marker-polygon is a polygon constructed using three or more marker-points. The direction of the polygon normal may be determined based at least in part on an order in which the constituent points are provided as input.
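The convention of deriving a plane normal from the ordering of three marker-points can be sketched with a right-hand-rule cross product. This is a generic construction, not specific to the described apparatus: three points supplied in anti-clockwise order (as seen from the front of the plane) yield a normal pointing towards the viewer.

```python
def plane_normal(p0, p1, p2):
    """Normal of the plane through three marker-points, ordered anti-clockwise.

    Computes the cross product of the edge vectors (p1 - p0) and (p2 - p0);
    reversing the point order flips the sign of the normal.
    """
    u = [p1[i] - p0[i] for i in range(3)]
    v = [p2[i] - p0[i] for i in range(3)]
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])
```

Swapping any two input points inverts the normal, which is how a '-XYPLANE'-style inverted plane could be represented.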
The apparatus may be configured to define one or more predefined marker-objects to enable quick usage without further definition. An example of a predefined marker-object is a marker-point named 'ORIGIN'. The 'ORIGIN' marker-point may be defined to be at (0,0,0) in X,Y,Z coordinates. Other examples of predefined marker-objects are marker-vectors named 'XAXIS', 'YAXIS' and 'ZAXIS', which can be defined to represent the X, Y and Z axes directions respectively with a predetermined magnitude value. Other examples of predefined marker-objects are marker-planes named 'XYPLANE', 'YZPLANE' and 'XZPLANE', which can be defined to represent the planes formed by three points lying on the respective planes. Other predefined marker-objects may be named with a negative prefix: for example, '-XAXIS' and '-XYPLANE' can be used to represent the X axis in the negative direction and the XYPLANE with an inverted normal respectively. These and other marker-objects may be used by referencing them in rules and/or using them in the assembly stage. A marker-object may be a child entity of a parent object. Examples of parent objects include, but are not limited to, a spine, a non-spine or a three-dimensional scene. The apparatus may be configured to allow the inheritance of marker-objects. For example, if a spine is converted to a non-spine, one or more marker-objects of the spine may be inherited by the non-spine. The inherited one or more marker-objects in the non-spine may be made static, meaning that their expression is not dynamically evaluated when their parent is modified. When two objects (for example a spine and a non-spine) undergo a Boolean operation, the resultant non-spine may inherit some or all marker-objects of the two input objects. A marker-object may be edited by the user to change one or more of its properties. Examples of such properties include, but are not limited to, name and expression. A user may be able to delete a marker-object.
However, if there are one or more objects in the three-dimensional workspace that depend on the marker-object being deleted, the apparatus may be configured to warn the user. The apparatus may be configured to prompt the user either to remove any dependency rules in the dependent object while deleting the marker-object or to cancel the marker-object deletion operation.
Returning to Figure 2, in some examples, the apparatus 201 comprises a modification and assembly module 205. The modification and assembly module 205 is configured to perform one or more modification operations on the generated three-dimensional geometry to modify one or more attributes of the three-dimensional geometry. An example of such an attribute is shape. Another example of such an attribute is location. Another example of such an attribute is orientation.
The modification and assembly module 205 may contain a list of operations that can be executed on a spine, non-spine, or any other three-dimensional workspace entity to change one or more of its attributes. Such operations may also be executed on a set of one or more spines and/or one or more non-spines to create one or more new spines and/or non-spines and/or to change existing shapes or other attributes.
To perform a modification or assembly task, the modification and assembly module 205 may receive input in various different forms. One example form is a rule provided by a user. Another example form is a rule implied by a user. Another example form is an action taken by the user via or in a graphical user interface. An action by the user in the graphical user interface may be mapped to an associated action. Examples of such actions include, but are not limited to, clicking on an icon or menu item, or dragging an arrow of a transform control that would translate the geometry. Another example form is a rule available via a data communications network.
The modification and assembly module 205 may be configured to perform a translation operation. In a translation operation, the three-dimensional geometry is moved by a specified number of units in a specified direction or axis.
The modification and assembly module 205 may be configured to perform a rotation operation. In a rotation operation, the three-dimensional geometry is rotated about a specified axis by a specified angle.
The modification and assembly module 205 may be configured to perform a scale operation. In a scale operation, the size of the three-dimensional geometry is increased or decreased along a specified direction or axis.
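The translation, rotation and scale operations described above can be sketched as plain functions over a vertex list. This is a simplified illustration (rotation is shown about the Z axis only); a full implementation would typically use 4x4 homogeneous transform matrices and arbitrary axes.

```python
import math

def translate(points, dx, dy, dz):
    """Move each vertex by the given number of units along each axis."""
    return [(x + dx, y + dy, z + dz) for x, y, z in points]

def scale(points, sx, sy, sz):
    """Grow or shrink the geometry along each axis about the origin."""
    return [(x * sx, y * sy, z * sz) for x, y, z in points]

def rotate_z(points, angle_deg):
    """Rotate each vertex about the Z axis by the specified angle."""
    c = math.cos(math.radians(angle_deg))
    s = math.sin(math.radians(angle_deg))
    return [(x * c - y * s, x * s + y * c, z) for x, y, z in points]
```

The three operations compose: applying scale, then rotation, then translation reproduces the usual transform order for placing a generated geometry in the workspace.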
The modification and assembly module 205 may be configured to perform a thickness modification. In a thickness modification, the perpendicular distance from a point on the spine-path to its corresponding surface point on the three-dimensional geometry is controlled, thereby controlling the thickness of the three-dimensional geometry.
The modification and assembly module 205 may be configured to perform a length modification. In a length modification, the length of the spine-path is controlled. This thereby influences the resultant generated shape of the three-dimensional geometry.
The modification and assembly module 205 may be configured to perform a decompose operation. In a decompose operation, a non-spine is deconstructed into its constituent spines and non-spines. Recursive decompose operations on a non-spine produce a set of spines.
The modification and assembly module 205 may be configured to perform a constructive solid geometry operation. In a constructive solid geometry operation, two or more three-dimensional geometries are combined using Boolean operations to form a new three-dimensional geometry. Examples of Boolean operations include, but are not limited to, a union operation to merge two three-dimensional geometries, a subtraction operation to subtract one three-dimensional geometry from the other, and an intersection operation to generate the portion common to two three-dimensional geometries.
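The three Boolean operations can be illustrated compactly if, purely for the sketch, a geometry is represented as a set of occupied voxel coordinates rather than a mesh; the union, subtraction and intersection of the patent text then reduce to set operations. A production CSG implementation on meshes is considerably more involved.

```python
def csg(voxels_a, voxels_b, operation):
    """Combine two voxelised geometries with a Boolean operation.

    'union' merges the two geometries, 'subtract' removes the second from
    the first, and 'intersect' keeps only the portion common to both.
    """
    a, b = set(voxels_a), set(voxels_b)
    if operation == 'union':
        return a | b
    if operation == 'subtract':
        return a - b
    if operation == 'intersect':
        return a & b
    raise ValueError('unknown operation: ' + operation)
```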
The modification and assembly module 205 may be configured to perform an orientation and placement operation. In an orientation and placement operation, the orientation and/or location of the three-dimensional geometry may be influenced by referencing a marker-point, marker-vector, marker-plane or marker-polygon in the workspace or of the geometries in the workspace. For example, when a marker-point is moved to a specific location in three-dimensional space, the parent object (spine or nonspine) with which the marker-point is associated may also be moved such that the relative positioning between the marker-point and its parent object is maintained. In another example, when a marker-vector is made perpendicular to the XYPLANE in three-dimensional space, the parent object (spine or non-spine) with which the marker-vector is associated, may also undergo an orientation change such that the relative orientation between the marker-vector and its parent object is maintained.
The modification and assembly module 205 may be configured to perform a displacement-map application operation. In a displacement-map application operation, the roughness of the surface of the three-dimensional geometry is controlled by affecting the depth of the vertices and faces on the surface of the three-dimensional geometry corresponding to the area marked in the image or matrix provided.
The modification and assembly module 205 may be configured to perform a hole-map application operation. In a hole-map application operation, the presence and shape of holes on the surface of the three-dimensional geometry is controlled.
The modification and assembly module 205 may be configured to perform a colour-map application operation. In a colour-map application operation one or more colours of the vertices and/or faces on the surface of the three-dimensional geometry are controlled.
The modification and assembly module 205 may be configured to perform a marker-map application operation. In a marker-map application operation, the location of named marker-points near, on or inside the surface of the three-dimensional geometry are controlled.
The modification and assembly module 205 may be configured to perform vertex-modification and face-modification operations. In a vertex-modification operation or face-modification operation, vertex points of the geometry are moved, inserted and/or deleted. This modifies the shape of the three-dimensional geometry in different ways. Examples of ways in which the shape of the three-dimensional geometry can be modified include, but are not limited to, hollowing out the three-dimensional geometry, smoothing the three-dimensional geometry, splitting the three-dimensional geometry into multiple parts, changing the shape or surface of a selective portion of the surface of the three-dimensional geometry, enclosing one three-dimensional geometry within another three-dimensional geometry by modifying vertices of one or both of the geometries, and welding two three-dimensional geometries together by forming smooth joints at points and regions of intersection.
The modification and assembly module 205 may be configured to change the shape or surface of a selective portion of the three-dimensional geometry depending on its relative position or orientation to another three-dimensional geometry.
The modification and assembly module 205 may be configured to assign one or more attributes to selective vertices and/or faces of the three-dimensional geometry. Examples of such attributes include, but are not limited to, colour and texture.
The modification and assembly module 205 may be configured to declare or modify marker-points, marker-vectors, marker-planes or marker-polygons near, on, or inside the three-dimensional geometry.
The modification and assembly module 205 may be configured to insert, change and/or delete one or more holes in one or more selected regions of the three-dimensional geometry.
The modification and assembly module 205 may be configured to adjust the resolution of the overall three-dimensional geometry and/or selective regions of the three-dimensional geometry.
The modification and assembly module 205 may be configured to group and/or merge multiple spines or non-spines to a single non-spine.
The modification and assembly module 205 may be configured to change a spine to a non-spine by modifying one or more attributes of the spine.
The modification and assembly module 205 may be configured to insert, modify and/or delete the rules associated with the creation history or parameters of a spine. This may trigger regeneration of the three-dimensional geometry associated with the spine.
The modification and assembly module 205 may be configured to insert, modify and/or delete the rules associated with the creation history of a non-spine. This may trigger reconstruction of the non-spine and regeneration of the three-dimensional geometry associated with the non-spine.
The modification and assembly module 205 may be configured to change the value of an attribute in the data-structure of a spine (for example a spine-path). This may trigger a reconstruction of the three-dimensional geometry associated with the spine.
The modification and assembly module 205 may be configured to insert, modify and/or delete a constituent spine of a non-spine. This may trigger reconstruction of the non-spine and regeneration of the three-dimensional geometry associated with the nonspine.
In some examples, the apparatus 201 comprises a scripting module 206. The scripting module 206 is configured to execute one or more user-specified rules and/or commands in relation to the generated three-dimensional model. Examples of such rules and/or commands include, but are not limited to, import, export and modify. A set of rules may be combined to form a ‘script’. When the script is executed one or more implied actions are performed. The set of rules can be set to run synchronously, where the scripting module 206 is configured to wait for completion of one rule execution before executing the next rule. The set of rules can be set to run asynchronously, where the scripting module 206 is configured not to wait for completion of a rule execution before executing the next rule. Whether the rules are run synchronously or asynchronously may depend, for example on the type of action and/or depending on user choice. Scripts may be stored and referenced from a given network location and/or from a user’s local computing device. A GUI may provide an option for the user to input rules and/or run the rules on demand.
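The synchronous/asynchronous distinction drawn above can be sketched as follows. The function names are illustrative; `execute` stands in for whatever callable performs the action implied by a single rule, and the asynchronous branch simply submits every rule to a thread pool without waiting between submissions.

```python
from concurrent.futures import ThreadPoolExecutor, wait

def run_script(rules, execute, synchronous=True):
    """Run a script (an ordered list of rules).

    In synchronous mode, each rule completes before the next is executed.
    In asynchronous mode, all rules are submitted without waiting; results
    are collected once every rule has finished.
    """
    if synchronous:
        return [execute(rule) for rule in rules]
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(execute, rule) for rule in rules]
        wait(futures)
        return [f.result() for f in futures]
```

Whether a given set of rules is safe to run asynchronously depends on whether the implied actions are independent, which is consistent with the text's suggestion that the choice may depend on the type of action.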
Users may be allowed to create custom scripts. Custom scripts may be set as private or public files in their workspace directory locally and/or in the network. A rule may be considered to be a computer-readable instruction. A rule may be provided by the user to the apparatus 100, 201. A rule may be generated by the apparatus and/or an external software application, for example based on instructions from the user. A rule may be in the form of a text string in an acceptable syntax format. Based at least in part on the rule syntax, the apparatus performs the implied action in the three-dimensional workspace 210. A rule may be accepted by the apparatus at various stages of the workflow. For example, a rule may be used by the input module 101, 202, the modification and assembly module 205 and/or the scripting module 206. A rule may be specified in the input data 203 provided to the input module 202. A rule may be restricted to run on one or more specific types of object. For example, a rule ‘setSegments’, which changes the number of spine-path segments in a spine-path, may be restricted to be run on a spine. A rule may contain a name of and/or reference to a named primitive. A rule may contain a name of and/or reference to an entity in the three-dimensional workspace 209. A rule may contain a name of and/or reference to a file accessible by the apparatus 100, 201. A rule may contain a name of and/or reference to one or more users of the apparatus 100, 201, for example by username and/or another identifiable attribute. A rule may contain a name of and/or reference to a desired output type. A rule may contain a name of and/or reference to a desired name of an output file or other output data.
An example of a rule is an instruction to declare, describe, modify and/or delete a primitive, spine, non-spine and/or marker-object. Another example of a rule is an instruction for a transformation operation to be performed on one or more generated geometries, during and/or after generation. Another example of a rule is an instruction to modify a desired shape, thickness, length, resolution and/or other property of a generated geometry, during and/or after generation. Another example of a rule is an instruction that affects the location of one or more vertices and/or faces of a generated geometry. Another example of a rule is an instruction containing metadata such as, but not limited to, a named variable with one or more values, a name string, a filename, a remote location detail and/or network connection information. Another example of a rule is an instruction to generate one or more geometries based on the input data available. Another example of a rule is an instruction on the storage of the generated model and/or retrieval of related information over the network.
When the execution of a rule results in a modification of an object (for example a spine or non-spine) or affects the attributes of the object in any way, the executed rule may be associated with the corresponding object. This association may be carried out by appending the rule in creation history metadata associated with the object. The apparatus may be configured to allow the insertion, modification, deletion, deactivation and/or activation of one or more rules in the creation history metadata. This may allow the object to be recreated by executing one or more of the rules from a previous state of the object. A rule associated with an object may be set to be evaluated dynamically. For example, when an object is modified, some or all of the rules in the creation history metadata associated with the object may be set to be re-executed. This recreates the object by replaying the rules from a previous state of the object.
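The creation-history mechanism described above can be sketched with a minimal class: every executed rule is appended to the object's history metadata, and the object can later be recreated by replaying those rules from a previous state. The class and attribute names are illustrative only.

```python
class HistoryObject:
    """Minimal sketch of an object carrying creation-history metadata."""

    def __init__(self, state):
        self.state = state
        self.history = []  # recorded (rule, args) pairs, in execution order

    def apply(self, rule, *args):
        """Execute a rule on the object and record it in the history."""
        self.state = rule(self.state, *args)
        self.history.append((rule, args))

    def recreate(self, initial_state):
        """Recreate the object by replaying the recorded rules."""
        state = initial_state
        for rule, args in self.history:
            state = rule(state, *args)
        return state
```

Deactivating a rule would correspond to skipping its entry during replay; dynamic evaluation corresponds to calling `recreate` automatically whenever the object is modified.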
In a GUI implementing some aspects of the apparatus 100, 201, a rule may be mapped to an icon or a menu item in the GUI. Selecting the icon or menu item may trigger the implied action associated with the relevant rule in the three-dimensional workspace.
In some examples, the apparatus 201 comprises an analysis module 207. The analysis module 207 is configured to analyse a creation history of the three-dimensional model of the object. The analysis module 207 is configured to generate creation history data based at least in part on the analysis. The functionality provided by the analysis module 207 may assist a user in understanding how the three-dimensional model of the object was created from input data.
The three-dimensional models created by the apparatus may be in the form of a combination of one or more spines and/or one or more non-spines. The shape of a geometry associated with a spine is based at least in part on data in the input data, other spine properties and/or one or more rules executed on the spine. The shape of a geometry associated with a non-spine is based on its constituent reference spine(s) and/or reference non-spine(s) along with any rules executed on the non-spine. As such, the shape of a geometry associated with a non-spine can be traced down to one or more spines.
Since the creation history metadata and properties of the geometries are available, the geometries in the three-dimensional workspace can be deconstructed on demand. For example, a geometry created by the Boolean union of two other geometries can be deconstructed to create the original two geometries.
As such, the overall three-dimensional scene may be visualised as a tree structure in which each first-level node is a spine or non-spine. A non-spine node can be expanded to reveal its constituent reference spine(s) and/or reference non-spine(s) and/or any rules executed on the non-spine. Furthermore, each spine node can be expanded to reveal its constituent parts, for example an associated spine-path, spine-range-selector-path, slice-path, history of rules executed on the spine and the like. The expanded or highlighted spine-node may reveal its corresponding spine-diagram and label its constituent parts. As such, in some cases, each geometry in the three-dimensional scene can be traced back to the original input data, for example a spine-diagram.
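The tree visualisation described above can be sketched with a recursive walk. The representation is an assumption made for illustration: entities are plain dicts, a non-spine carries a 'references' list of its constituent spines/non-spines, and a spine carries a 'parts' list (spine-path, slice-path and so on); both may carry a 'rules' list.

```python
def creation_tree(entity):
    """Build a nested description of how a workspace entity was created."""
    node = {'name': entity['name'], 'rules': list(entity.get('rules', []))}
    if 'references' in entity:
        # A non-spine: expand into its constituent reference entities.
        node['children'] = [creation_tree(ref) for ref in entity['references']]
    else:
        # A spine: a leaf of the tree, revealing its constituent parts.
        node['parts'] = list(entity.get('parts', []))
    return node
```

Walking the resulting tree from any first-level node down to its leaves traces a geometry back to its original spine-diagrams.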
The analysis module 207 may be configured to import a specific sub-object into the three-dimensional workspace 209. The analysis module 207 may be configured to add, modify and/or delete a specific sub-object and/or trigger reconstruction of one or more top-level objects. Examples of such sub-objects include, but are not limited to a reference spine and a reference non-spine.
In some examples, the apparatus 201 comprises an evolution module 208. The evolution module 208 is configured to modify at least one feature of the three-dimensional model based at least in part on one or more evolution constraints. The evolution constraints may be user-specified.
The evolution module 208 is configured to use one or more evolutionary algorithms to modify the three-dimensional model to satisfy a specific requirement or a specified target. Evolutionary algorithms can be used to solve optimization problems using solutions inspired by biological evolution.
The evolution module 208 may be configured to apply an evolutionary algorithm as follows. A copy of the three-dimensional model can be mutated by means of a slight variation in one or more of its parameters or properties. This may generate a different three-dimensional geometry for that three-dimensional model. A fitness function may then be evaluated on mutated copies of the three-dimensional model. For example, the new shape of the geometry in the mutated copies of the three-dimensional model may be tested for how close it is to a targeted desired shape. If the fitness function reveals that the shape of the mutated geometry is sufficiently close for a given purpose, the evolution module 208 may determine that a desired result has been achieved and the process may stop.
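The mutate/evaluate/stop loop described above can be sketched as a simple hill-climbing variant of an evolutionary algorithm. The sketch is illustrative: `mutate` stands for any slight variation of a model's parameters, `fitness` for any measure of closeness to the targeted shape (higher is better), and real evolutionary algorithms would typically maintain a population rather than a single candidate.

```python
import random

def evolve(model, mutate, fitness, target_fitness, max_generations=1000, seed=0):
    """Repeatedly mutate a copy of the model, keeping improving mutations.

    Stops once the fitness function reports the model is sufficiently close
    to the target, or after max_generations attempts.
    """
    rng = random.Random(seed)
    best, best_fit = model, fitness(model)
    for _ in range(max_generations):
        if best_fit >= target_fitness:
            break  # the desired result has been achieved
        candidate = mutate(best, rng)
        candidate_fit = fitness(candidate)
        if candidate_fit > best_fit:
            best, best_fit = candidate, candidate_fit
    return best, best_fit
```

Here a 'model' could be any mutable representation, for example a spine parameter set, with `mutate` applying one of the mutation rules listed below.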
Known three-dimensional model generation systems generally express the shape of objects in terms of vertices and faces. This limits the different ways in which mutation can be performed. By expressing shapes in terms of constituent parts (for example spine-paths), along with a large number of other intuitive parameters, creation history information and/or rule information, more meaningful mutations are enabled in the evolutionary algorithms. The mutations may be more meaningful in terms of shape, for example.
Changes in the one or more properties, one or more parameters and/or one or more constituent entities of a spine, non-spine or other workspace entity may be considered to be a 'mutation'. The representation of such a mutation will be referred to hereinafter as a 'mutation rule'.
An example of a mutation is a change in translation, rotation, scale, thickness and/or length of a spine. Another example of a mutation is a change in the slice-path and/or silhouette-outline-path associated with a spine. Another example of a mutation is a change in one or more constituent reference spines of a non-spine, which triggers reconstruction of the non-spine. Another example of a mutation is a Boolean operation on a spine or non-spine with another spine or non-spine. Another example of a mutation is a change in one or more rules and/or one or more parameters in one or more rules relating to a spine or non-spine, which reconstructs the shape of the spine or non-spine. Another example of a mutation is a change in one or more marker-points of a spine or non-spine. Another example of a mutation is a change in any entity (for example a marker-object) present in the three-dimensional workspace 209. Another example of a mutation is a genetic-crossover between the properties of two spines or non-spines. Another example of a mutation is a change in one or more user-defined variables, which in turn modify one or more parameters of one or more spines, one or more non-spines and/or one or more other entities in the three-dimensional workspace 209.
The change in one or more rules or one or more parameters in one or more rules relating to the spine or non-spine may be stored in creation history data.
The apparatus 201 may be configured to provide an option to a user to enable or disable one or more mutation rules during evolution of a three-dimensional model. This might enable a faster or more efficient way to arrive at an optimized solution by enabling use of mutation rules which are more likely to help towards optimization and/or preventing use of mutation rules which are less likely to help towards optimization.
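One possible way to represent such an enable/disable option is sketched below. The registry structure and the rule names ('scale_spine', 'rotate_spine') are hypothetical; only rules whose flag is enabled participate in the evolution process.

```python
# Hypothetical sketch: mutation rules held in a registry with a per-rule
# enabled flag, so a user can switch off rules less likely to help towards
# optimization. Rule names and the model fields are assumptions.
mutation_rules = {
    "scale_spine":  {"enabled": True,
                     "apply": lambda m: {**m, "scale": m["scale"] * 1.1}},
    "rotate_spine": {"enabled": False,
                     "apply": lambda m: {**m, "rotation": m["rotation"] + 5}},
}

def enabled_rules(rules):
    """Return the apply-functions of rules the user has left enabled."""
    return [r["apply"] for r in rules.values() if r["enabled"]]

model = {"scale": 1.0, "rotation": 0}
for apply_rule in enabled_rules(mutation_rules):
    model = apply_rule(model)
# only the enabled 'scale_spine' rule runs, so 'rotation' is unchanged
```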
The apparatus 201 may be configured to evaluate the fitness function itself. In some examples, an interactive genetic algorithm may be used, whereby the evaluation is performed at least in part by a user and/or one or more other external entities.
The apparatus 201 may be configured to use one or more assumptions to speed up the evolutionary process. For example, the user might specify that the targeted geometry has bilateral symmetry about a specific three-dimensional plane.
In some examples, the mutation process is distributed across multiple processing devices for faster or otherwise preferred performance. Multiple mutated variants may be generated from a single mutable entity.
Figures 51A to 51C illustrate the execution of mutation rules on a three-dimensional model and the resultant effects on the shape of the geometry of the model.
Figure 51A shows a spine-diagram 5100 that contains embedded paths 5101, 5102, 5103 (triangular, circular and curved respectively) with appropriately named strings 5104, 5105, 5106 placed close to the corresponding paths. Each path 5101, 5102, 5103 has a start-marker (dot) and an end-marker (arrow). The circular path 5102 starts and ends at the same point. The top-left part of the spine-diagram 5100 shows a spine-path 5107 enclosed by a spine-range-selector-path 5108 associated with a square slice-path 5109 using an association-line 5110.
Figure 51B shows the wireframe 5111 and shaded 5112 views of the three-dimensional geometry that is generated from the spine-diagram 5100. The generated data-structure from the spine-diagram 5100 also stores the three paths 5101, 5102, 5103 and assigns the relevant names 5104, 5105, 5106 to them.
In this example, there are five mutation rules. Mutation rule 1 is setting the number of spine-path segments to fifteen. Mutation rule 2 is setting the slice-path to PATH.circle1. Mutation rule 3 is setting the silhouette-outline-path to PATH.circle1. Mutation rule 4 is setting the silhouette-outline-path to PATH.triangle1. Mutation rule 5 is setting the spine-path to PATH.curve1.
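The five mutation rules above can be illustrated as named parameter changes on a spine's data-structure. This is a hedged sketch: the field names (spine_path, slice_path, silhouette_outline_path, segments) and the initial values are assumptions for illustration, not the actual data-structure of the apparatus.

```python
# Hypothetical representation of the spine before mutation; field names and
# initial values are assumptions.
spine = {
    "spine_path": "PATH.line1",
    "slice_path": "PATH.square1",
    "silhouette_outline_path": None,
    "segments": 2,
}

# The five mutation rules of this example, expressed as parameter changes.
mutation_rules = [
    lambda s: {**s, "segments": 15},                               # rule 1
    lambda s: {**s, "slice_path": "PATH.circle1"},                 # rule 2
    lambda s: {**s, "silhouette_outline_path": "PATH.circle1"},    # rule 3
    lambda s: {**s, "silhouette_outline_path": "PATH.triangle1"},  # rule 4
    lambda s: {**s, "spine_path": "PATH.curve1"},                  # rule 5
]

# Executing each rule in turn would trigger regeneration of the geometry,
# ending with a curved spine-path, circular slice-path and triangular
# silhouette-outline-path: the horn shape of the sixth row of Figure 51C.
for rule in mutation_rules:
    spine = rule(spine)
```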
Figure 51C shows a table that illustrates the change in the shape of the three-dimensional geometry on executing a mutation rule on its associated three-dimensional model. A wire-frame view, shaded view, mutation rule applied, and implied spine-diagram are shown in the first, second, third and fourth columns of the table respectively.
The first row of the table shows the initial shape of the geometry and the implied spine-diagram. This assumes that the spine-path is divided into two segments.
The second row of the table shows the changed vertices and faces in the generated geometry after executing mutation rule 1 to set the number of segments of the spine-path to fifteen.
The third row of the table shows the changed shape of the geometry (now a cylinder) after executing mutation rule 2 to modify the slice-path to be the circular path 5102.
The fourth row of the table shows the changed shape of the geometry (now a sphere) after executing mutation rule 3 to introduce silhouette data corresponding to the circular path 5102.
The fifth row of the table shows the changed shape of the geometry (now a cone) after executing mutation rule 4 to change the silhouette-outline-path to be a triangular path 5101.
The sixth row of the table shows the changed shape of the geometry (now horn-shaped) after executing mutation rule 5 to change the spine-path to be a curved line-segment 5103.

A fitness function may be used to evaluate the shape and/or one or more other properties of a given geometry and produce a computed result. The term 'mutated geometry' is used herein to describe a geometry generated as a result of mutation on its associated three-dimensional model, which may be associated with a spine or non-spine. An example of a fitness function is a difference between the volume of a mutated geometry and a targeted volume (for example of a targeted geometry). Another example of a fitness function is a quantified value of one or more aerodynamic properties of a mutated geometry. Another example of a fitness function is a suitability of the mutated geometry for a given purpose. Another example of a fitness function is a desirability of the shape of a given geometry, as chosen by a human during an interactive genetic algorithm procedure. For example, a fitness function may be able to determine a maximum output power of a mutated geometry when the mutated geometry is used as a wind-turbine fan blade. For example, a fitness function utilizing interactive human input may be able to determine the comfort level and desirability of the shape of a chair-shaped geometry.
In some examples, evaluation of the fitness function results in a value and/or dataset. A threshold may be defined such that, when the evaluation result is within an acceptable range or threshold, the entity under evaluation is declared as a sufficiently good fit. This threshold can be specified in 'acceptance criteria'. In some examples, a limit can be set (for example by a user) on the maximum number of permitted mutation cycles during the evolution process. This may be beneficial in scenarios where convergence towards the target does not occur within a reasonable period of time. This number may be specified in the 'exit criteria'.

A spine or non-spine may be mutable towards a target three-dimensional geometry. This facilitates creating the target three-dimensional geometry in terms of spines and non-spines. In this case, a fitness function may be defined such that a difference between the target three-dimensional geometry and the mutated three-dimensional geometry is below a minimum threshold. For example, it may be desirable that the difference is as close to zero as possible. There are various different ways in which the difference may be calculated. Examples of the ways in which the difference may be calculated include, but are not limited to: comparing one or more rendered images of the mutated geometry and one or more rendered images of the target geometry using the same render settings; and determining the volume of the geometry created by the Boolean difference between the mutated geometry and the targeted geometry.
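The interplay of acceptance criteria and exit criteria can be sketched as follows, under simplifying assumptions: the fitness function is taken to be the difference between a mutated geometry's volume and a targeted volume, mutation perturbs the volume directly, and all numeric values are illustrative.

```python
import random

# Sketch only: a volume-based fitness with an acceptance threshold and a
# maximum number of permitted mutation cycles (the exit criteria).
random.seed(1)
TARGET_VOLUME = 100.0
ACCEPTANCE_THRESHOLD = 0.5   # acceptance criteria
MAX_CYCLES = 500             # exit criteria

volume = 40.0                # volume of the initial geometry (illustrative)
accepted = False
for cycle in range(MAX_CYCLES):
    candidate = volume * random.uniform(0.95, 1.05)          # mutate
    if abs(candidate - TARGET_VOLUME) < abs(volume - TARGET_VOLUME):
        volume = candidate                                   # keep better fit
    if abs(volume - TARGET_VOLUME) < ACCEPTANCE_THRESHOLD:
        accepted = True      # sufficiently good fit: stop evolving
        break
```

If the loop exhausts MAX_CYCLES without `accepted` becoming true, the exit criteria have been met without convergence, mirroring the behaviour described above.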
By expressing the target three-dimensional geometry in terms of spines and/or non-spines, the result is reusable, customizable and/or further evolvable.
Referring to Figure 52, there is shown a flowchart illustrating an example of a method of creating a three-dimensional model by evolution towards a specific three-dimensional geometry.
At 5201, a determination is made on the method of selecting one or more initial spines and/or non-spines to be mutated.
At 5202, data defining the determined one or more initial spines and/or non-spines to be mutated is obtained.
At 5203, one or more derived attributes are created from the obtained data. Examples of derived attributes include, but are not limited to, a volume and/or a depth map.
At 5204, data defining an evolution target is obtained. In this example, the evolution target is a target three-dimensional geometry.
At 5205, one or more derived attributes are created from the data defining the target geometry. Examples of derived attributes include, but are not limited to, a volume and/or a depth map.
At 5206, one or more permitted mutation rules are identified, one or more acceptance criteria are identified and one or more exit criteria are identified.
At 5207, one or more fitness functions are defined. The one or more fitness functions may be based on a relationship (for example a difference) between the one or more derived attributes derived from the one or more initial spines and/or non-spines to be mutated and the data defining the target geometry.
At 5208, an evaluation is made using the one or more fitness functions as to a fitness value.
At 5209, if one or more acceptance criteria are satisfied, the one or more initial spines and/or non-spines represent a required result (5210) and evolution processing stops (5211).
At 5209, if the one or more acceptance criteria are not satisfied, then it is determined whether one or more exit criteria are satisfied (5212).
If the one or more exit criteria are satisfied (5212), then evolution processing stops (5211).
If the one or more exit criteria are not satisfied (5212), then a mutation operation (5213) is performed on the initial spines and/or non-spines and/or one or more further spines and/or non-spines. The further spines and/or non-spines may be selected from an existing repository of spines and non-spines, selected/created using a predetermined method, or selected/created directly by a human user. Processing returns to 5203, where one or more derived attributes are created from the mutated geometry or geometries.
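The flow of Figure 52 can be outlined in code under strong simplifying assumptions: the derived attribute (5203/5205) is a single scalar, the fitness function (5207/5208) is an absolute difference, and a mutation (5213) is a small random perturbation kept only when it improves fitness. None of the names below are part of the claimed apparatus.

```python
import random

# Illustrative outline of the Figure 52 loop; comments reference the
# corresponding flowchart steps.
def evolve(initial, target, acceptance=0.01, max_cycles=10000):
    random.seed(0)                                   # deterministic sketch
    entity = initial
    best = abs(entity - target)                      # 5208: evaluate fitness
    for _ in range(max_cycles):                      # 5212: exit criteria
        if best <= acceptance:                       # 5209: acceptance criteria
            return entity, True                      # 5210: required result
        candidate = entity + random.uniform(-1, 1)   # 5213: mutation operation
        fitness = abs(candidate - target)            # 5203-5208 on the mutant
        if fitness < best:
            entity, best = candidate, fitness        # keep the better fit
    return entity, False                             # 5211: stop on exit criteria

result, accepted = evolve(0.0, 7.5)
```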
Some drawbacks to known 3D-scanning techniques include, but are not limited to: holes in the generated three-dimensional content owing to occlusion or light scattering; the generated three-dimensional model from a 3D scanner being non-parametric (for example it may comprise a cloud of points rather than a complete model); the possible symmetry of an object not being taken into account when fixing missing data; and significant post-processing being required to transform the output from a 3D scanner to a usable state.
In some examples, three-dimensional geometry shape generation may be achieved using a best fit approach based on available data. Examples of available data include, but are not limited to: depth-maps at given angles; and bilateral symmetry. A depth-map contains information about the relative locations of points in a three-dimensional object with reference to a view point.
The evolution module 208 may be configured to generate an approximated three-dimensional shape in response to receiving input depth-map data of a three-dimensional object as follows.
The evolution module 208 may receive input information on one or more of the following: environment details, object symmetry assumptions, camera location and/or camera properties, and other data used in creation of the input depth-map. If this information is not provided, these properties may be approximately estimated or may be determined from a predefined template.
One or more spines or non-spines are generated and/or selected. This may be based on user input, from a pre-defined template and/or from among a library of spines and non-spines. The selection may be based on how close the shapes of the spines and non-spines are to the targeted shape.
The selected spines or non-spines are mutated and placed in a three-dimensional scene to create a three-dimensional geometry. A depth map of the three-dimensional geometry is created using settings similar to those used in creating the input depth map. When multiple depth-maps are provided as input, a set of depth-maps for the mutated geometry are created with the corresponding camera and/or other settings. A fitness function may be defined such that the optimum solution is the mutated three-dimensional shape that produces a depth map closest to the corresponding input depth map. When multiple depth-maps from different camera angles are provided as input, the fitness function may take each paired image (input depth-map and depth-map of mutated geometry) and quantify the overall fitness, for example as a cumulative sum.
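The cumulative fitness over multiple paired depth-maps can be sketched as follows, assuming (purely for illustration) that a depth-map is a two-dimensional list of per-pixel depth values and that the per-pair difference is a sum of absolute pixel differences.

```python
# Hedged sketch: real depth-maps would come from the renderer at the camera
# settings described above; here they are small hand-written grids.
def pair_difference(depth_a, depth_b):
    """Sum of absolute per-pixel depth differences for one camera angle."""
    return sum(abs(a - b) for row_a, row_b in zip(depth_a, depth_b)
                          for a, b in zip(row_a, row_b))

def overall_fitness(input_maps, mutated_maps):
    """Cumulative fitness over all paired depth-maps (lower is better)."""
    return sum(pair_difference(i, m) for i, m in zip(input_maps, mutated_maps))

# Two camera angles: input depth-maps vs. depth-maps of the mutated geometry.
input_maps   = [[[1.0, 2.0], [3.0, 4.0]], [[2.0, 2.0], [2.0, 2.0]]]
mutated_maps = [[[1.0, 2.5], [3.0, 4.0]], [[2.0, 2.0], [2.0, 1.0]]]
# overall fitness: |2.0-2.5| + |2.0-1.0| = 1.5
```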
One or more checks may be introduced for example to limit the number of mutation cycles, to restrict the types of permitted mutation rules and/or to set the threshold for acceptance.
The evolution module 208 may be configured to accept input regarding the target geometry and adjust a mutated geometry to match this constraint. An example of such input includes, but is not limited to the existence of bilateral symmetry.
In general, the higher the resolution of the depth-map, the better the achievable resolution of the generated three-dimensional shape.
Referring to Figure 53, there is shown a flowchart illustrating an example of a method of creating a three-dimensional model from depth-map data or partial 3D scan data. 5301 to 5313 correspond closely to 5201 to 5213 described above with reference to Figure 52. At 5304, depth-map data and, where available, corresponding depth-map metadata are obtained. At 5305, depth-map metadata is estimated where not available at 5304.
The evolution module 208 may be configured to approximate the three-dimensional shape of an object based on one or more rendered images of a mutated geometry. A user may be able to input metadata relating to the rendered image. For example, the user may be able to provide information on light source properties, environment details, object texture details, camera location, camera properties, renderer details and other data used in creation of the initial rendered image. If this information is not provided, these properties may be approximately estimated or may be determined from a predefined template.
The mutation rules may include modification of the texture and/or shape of the geometry. The mutation rules may include the addition, deletion and/or editing of lights in the three-dimensional scene.
One or more spines or non-spines may be mutated and placed in a three-dimensional scene. A rendered image of the three-dimensional geometry may be created using similar rendering techniques, render settings, environment details and/or camera parameters to those used for the initial render. When multiple rendered images are provided as input, a set of renderings of the mutated geometry may be created with the corresponding render settings and/or other settings. A fitness function may be defined such that the optimum solution is the mutated three-dimensional shape that produces a rendered image closest to the corresponding input rendered image. When multiple rendered images are provided as input, the fitness function may take each paired image (input render and corresponding render of the mutated geometry) and quantify the overall fitness, for example as a cumulative sum.
One or more checks may be introduced for example to limit the number of mutation cycles, restrict the types of permitted mutation rules and/or set the threshold for acceptance.
The evolution module 208 may be configured to accept input regarding the target geometry and adjust a mutated geometry to match this constraint. An example of such input includes, but is not limited to the existence of bilateral symmetry.
In general, the higher the resolution of the input rendered image, the better the achievable resolution of the generated three-dimensional shape.
Referring to Figure 54, there is shown a flowchart illustrating an example of a method of creating a three-dimensional model from a rendered image of a three-dimensional object. 5401 to 5413 correspond closely to 5201 to 5213 described above with reference to Figure 52 and 5301 to 5313 described above with reference to Figure 53. At 5404, rendered image data and, where available, corresponding rendered image metadata are obtained. At 5405, rendered image metadata is estimated where not available at 5404.
The evolution module 208 may be configured to generate a three-dimensional model from one or more photographs of a physical (or ‘real-world’) object. The evolution module may be configured to treat the one or more photographs as physically-based rendered images of the object and then to proceed as outlined above.
The user may input metadata relating to the one or more photographs. For example, the user may be able to identify one or more light source properties, environment details, object texture details, one or more camera location details and/or one or more camera properties used in creation of the one or more photographs. If this information is not provided, these properties may be approximately estimated or may be determined from a predefined template.
The mutation rules may include modification of the texture and/or shape of the geometry. The mutation rules may include the addition and/or editing of lights in the three-dimensional scene.
One or more spines or non-spines may be mutated and placed in a three-dimensional scene. A rendered image of the resulting three-dimensional geometry may be created using similar photographic settings, environment details and/or camera parameters to those used in relation to the initial one or more photographs.
This technique performs well if the rendered image is created using physically based rendering software. Physically based rendering software applications create the rendered image of an object based on the physical behaviour of light (for example radiosity). Physically-based rendering may generate highly realistic views of the object, for example photorealistic rendered images. However, another type of renderer may be used, for example one that produces rendered images of adequate photorealism.
When multiple photographs are provided as input, a set of renderings of the mutated geometry are created with the corresponding photographic and/or render settings. The evolution module 208 may be configured to try to ensure that each constraint is simultaneously fulfilled. In other words, the evolution module 208 may be configured to try to ensure that each rendered image is as close as possible to the corresponding initial photograph, given that particular camera location, camera property, environment details and/or light setting constraint. A fitness function may be defined such that an optimum solution is the mutated three-dimensional shape that produces the closest rendered image to the initial corresponding photograph. If multiple photographs are provided as input, the fitness function might take each paired image (photograph and corresponding render of the mutated geometry) and quantify the overall fitness, for example as a cumulative sum.
One or more checks may be introduced for example to limit the number of mutation cycles, restrict the types of permitted mutation rules and/or set the threshold for acceptance.
The evolution module 208 may be configured to accept input regarding the target geometry and adjust a mutated geometry to match this constraint. An example of such input includes, but is not limited to, the existence of bilateral symmetry.
In general, the higher the resolution of the one or more input photographs, the better the achievable resolution of the generated three-dimensional shape.
Referring to Figure 55, there is shown a flowchart illustrating an example of a method of creating a three-dimensional model from a photograph of a physical object. 5501 to 5513 correspond closely to 5201 to 5213 described above with reference to Figure 52, and 5301 to 5313 described above with reference to Figure 53 and 5401 to 5413 described above with reference to Figure 54. At 5504, photograph data and, where available, corresponding photograph metadata are obtained. At 5505, photograph metadata is estimated where not available at 5504.
Various measures (for example, an apparatus, a method, a computer program and a non-transitory computer-readable storage medium) to be used in generating a three-dimensional model of an object are provided. An input module is configured to obtain input data defining a plurality of line-segments in a two-dimensional space. The plurality of line-segments includes a first line-segment that represents a slice-path. The plurality of line-segments includes a second line-segment that represents a spine-path. A three-dimensional model generator module is configured to generate the three-dimensional model of the object using data derived from the first line-segment and the second line-segment. A user may experience a less steep learning curve and/or a more intuitive interface for the basic workflow. Creating a three-dimensional model may require less effort and time than in known three-dimensional modelling environments, for example when inputting data and/or modification commands in a format the software application can understand. On-demand customisation of a shape and/or resolution of a generated geometry may be less complicated and/or time-intensive.
The techniques described above may require significantly less data from the user to construct a three-dimensional model and may provide an easier and a faster method of input compared to known three-dimensional modelling systems.
Three-dimensional shape creation is performed using relatively little effort and input from the user. The user input to create the spine three-dimensional shapes may be in the form of one or more spine-diagrams, which comprise a combination of two-dimensional strokes and/or shapes. Compared to known three-dimensional modelling systems, which involve navigating in three-dimensional space to create a three-dimensional model, the spine-diagram is provided as input through two-dimensional drawings or other representations of two-dimensional drawings. This may make input easier and faster. Besides SVG data and raster-image input, input may be in the form of a photograph and/or a scanned image of a spine-diagram drawn on physical media such as paper. This may be a further convenience and time-saving feature for a user.
In some embodiments, one or both of the first and second line-segments are hand-drawn.
In some embodiments, one or both of the first and second line-segments are completely contained within the two-dimensional space.
In some embodiments, the input module is configured to obtain at least some of the input data via an input device selected from the group consisting of a digital drawing surface, a keyboard, a computer mouse, a motion-capture device, a touch-sensitive display screen and an image-capture device.
In some embodiments, the input data comprises one or more of image data and text data.
In some embodiments, the three-dimensional model generator module comprises an input parser module configured to process the input data and output the data derived from the first line-segment and the second line-segment.
In some embodiments, the input parser module is configured to recognise that the first line-segment represents a slice-path and that the second line-segment represents a spine-path based at least in part on information in the input data.
In some embodiments, the three-dimensional model generator module comprises a geometry generator module configured to generate the three-dimensional model using the data derived from the first line-segment and the second line-segment.
In some embodiments, the three-dimensional model generator module comprises a post-processor module configured to perform one or more post-processing operations.
In some embodiments, the input data comprises data associated with one or more marker-objects. The one or more post-processing operations comprise defining at least one attribute of one or more marker-objects using data derived from the input data. The one or more post-processing operations may comprise defining at least one attribute of one or more marker-objects using data derived from the data associated with the one or more marker-objects. A generated three-dimensional model may be reusable. The generated three-dimensional model may have one or more embedded, addressable reference points in three-dimensional space. This may facilitate automatic and/or manual movement and/or orientation of a geometry with respect to the location and/or orientation of another geometry in a three-dimensional environment.
The techniques described above may facilitate relatively easy assembly, placement and/or orientation of geometries in a three-dimensional workspace. Using marker-points, marker-vector, marker-planes and/or marker-polygons, multiple geometries may be oriented and/or assembled by inter-association of marker-points and marker-vectors and paths.
In some embodiments, a modification and assembly module is configured to perform one or more modification operations and/or one or more assembly operations.
In some embodiments, the one or more modification operations and/or one or more assembly operations comprise adjusting a resolution of at least part of the three-dimensional model.
The techniques described above may provide dynamic resolution control. The level of detail of the three-dimensional geometry associated with a spine may be controlled by changing a number and/or location of spine-path segments, spine-path segment points and/or slice boundary points. The level of detail of the three-dimensional geometry associated with a non-spine may be controlled by changing a number and/or location of spine-path segments, spine-path segment points and/or slice boundary points of one or more constituent spines and regenerating the non-spine geometry. Dynamic resolution control may differ from that of known three-dimensional software applications, in which a reduction in the vertices, faces and/or triangles is performed such that the intended shape is preserved as much as possible. When the number of spine-path segment and slice boundary points in a spine is relatively high, the three-dimensional geometry associated with the spine more closely resembles its intended shape. This may be beneficial as 3D-printers may accept input in the form of a triangulated three-dimensional geometry comprising a number of vertices and triangular faces. As such, prior to being 3D-printed, a user may alter a three-dimensional shape by setting the level of resolution required.
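A minimal sketch of dynamic resolution control, under the assumption (introduced here for illustration, not stated in this form above) that the vertex count of a spine's triangulated geometry is the number of spine-path segment points multiplied by the number of slice boundary points:

```python
# Hypothetical relationship between spine parameters and mesh resolution;
# the formula is an assumption for illustration only.
def vertex_count(spine_path_segments, slice_boundary_points):
    # segment points = segments + 1 along the spine-path
    return (spine_path_segments + 1) * slice_boundary_points

low  = vertex_count(spine_path_segments=2,  slice_boundary_points=4)   # coarse
high = vertex_count(spine_path_segments=15, slice_boundary_points=32)  # fine
# the same spine, regenerated at two resolutions: 12 vs. 512 vertices
```

Raising either count and regenerating the geometry yields a denser triangulation, which is the parameter a user would adjust before 3D-printing.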
The techniques described above may provide selective regional resolution control. When a user navigates through three-dimensional space in a GUI as described above that uses a system of spines and non-spines, the user may be able to change dynamically the resolution of a part of a geometry depending on a camera position and/or camera viewport. For example, if a user zooms into, or the camera points to, a particular three-dimensional geometry and/or a region of a three-dimensional geometry, the three-dimensional geometry and/or region that is viewable through the camera can be set to a higher resolution, for example by changing its number of spine-path segments and/or boundary points in that region. This allows savings in computer memory, allowing more detailed geometry to be visible to a user of a computing device with relatively lower memory. This may also be used to control resolution selectively in a defined region of a three-dimensional geometry, not necessarily for three-dimensional viewing purposes but, for example, to obtain sharper edges and/or smoother surfaces in a specific region of a three-dimensional geometry.
In some embodiments, a scripting module is configured to execute one or more scripting operations.
In some embodiments, an analysis module is configured to perform one or more analysis operations.
In some embodiments, the one or more analysis operations comprise analysing a creation history of a three-dimensional model and generating creation history metadata based at least in part on said analysis.
The generated three-dimensional model may be stored in a format such that it may be possible to retrace some or all of the steps carried out by an original creator to create the model and/or to extract the original input given by the original creator to create the model.
In some embodiments, an evolution module is configured to perform one or more evolution operations.
In some embodiments, the one or more evolution operations comprise modifying at least one attribute of the three-dimensional model based at least in part on one or more target attributes for the three-dimensional model.
The techniques described above may facilitate generation of evolvable three-dimensional models. Three-dimensional geometries generated by the evolution module 208 may solve the known problem of raw point clouds and/or holes that occur when creating three-dimensional models using 3D-scanning techniques. A limited number of mutations or slight variation in data in a three-dimensional model may be used to produce non-identical geometries. Techniques described above also provide the ability to enable mutation for fitness optimisation, for example by varying one or more parameters of two-dimensional paths, thickness attributes, length attributes, displacement-map, hole-map, marker-map and/or rule values.
In some embodiments, the one or more target attributes are derived from data selected from the group consisting of target three-dimensional geometry data, depth-map data, rendered image data and photograph data.
In some embodiments, a management module is configured to perform one or more management operations.
In some embodiments, the plurality of line-segments comprises at least one further line-segment.
In some embodiments, the geometry generator module is configured to generate the three-dimensional model of the object using data derived from the at least one further line-segment in addition to said data derived from the first line-segment and the second line-segment.
In some embodiments, the at least one further line-segment represents a primitive selected from the group consisting of a spine-range-selector-path, a slice-path, a silhouette-outline-path, a width-line associated with a silhouette-outline-path, a z-path and an association-line.
In some embodiments, a file exporter module is configured to generate a data file representing at least part of the three-dimensional model.
In some embodiments, the file exporter module is configured to generate the data file in a 3D-printable format. A three-dimensional model generated using the apparatus 201 described herein may be exported to a single image file or SVG file. The file may embed some or all of the relevant spine-diagrams and/or some or all of the rules to be executed on the geometries generated by the spine diagrams. Such an exported file, when processed by the geometry generator module, may be used to re-create an originally exported three-dimensional model.
In some embodiments, a system comprises such an apparatus and a 3D-printer configured to create a physical object based at least in part on the three-dimensional model of the object.
Various measures (for example, an apparatus, a method, a computer program and a non-transitory computer-readable storage medium) to be used in generating a three-dimensional model of an object are provided. In some examples, an input module is configured to obtain input data. The input data may take various different forms as described above in detail. In some examples, the input data defines at least one item in a two-dimensional space. In some examples, the at least one item comprises one or more line-segments. In some examples, the at least one item comprises one or more paths. In some examples, the at least one item comprises one or more shapes. In some examples, one or more of the at least one items represents a slice-path. In some examples, one or more of the at least one items represents a spine-path. In some examples, a three-dimensional model generator module is configured to generate the three-dimensional model of the object using data derived from the input data. A modelling environment is described above in which three-dimensional models can be generated based on a combination of one or more of two-dimensional strokes or paths, scalable vector graphics data, image data and computer-readable instructions provided by the user. One or more specific points in three-dimensional space may be defined relative to the geometry of the generated three-dimensional model. The location of the points may be dynamically calculated and used in the assembly of the geometry of the generated three-dimensional model in three-dimensional space. The three-dimensional models may be parameterized, reusable, customizable, 3D-printable, may be easily and intuitively created and edited, and/or may contain creation history information. A three-dimensional shape may also be evolved towards a target three-dimensional shape and/or a more optimal shape for a given fitness function.
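By way of illustration only, the generation of a three-dimensional model from a two-dimensional slice-path and a two-dimensional spine-path may be sketched as follows. This is a minimal Python sketch under simplifying assumptions; the function name and the translation-only placement of slice copies are illustrative, not a definitive implementation of the techniques described above.

```python
def sweep_slice_along_spine(slice_pts, spine_pts):
    """Place a copy of a 2D slice-path at each point of a 2D spine-path.

    slice_pts: list of (x, y) tuples describing a closed 2D slice-path.
    spine_pts: list of (x, y) tuples describing a 2D spine-path; the
    spine's second coordinate is treated as the model's z-axis here.
    Returns a list of rings, each a list of (x, y, z) vertices.
    """
    rings = []
    for sx, sz in spine_pts:
        # Each ring is the slice translated to the spine point; a fuller
        # implementation would also orient and scale the slice.
        rings.append([(sx + px, py, sz) for px, py in slice_pts])
    return rings

# A square slice-path swept along a three-point vertical spine-path.
slice_path = [(1, 1), (-1, 1), (-1, -1), (1, -1)]
spine_path = [(0, 0), (0, 1), (0, 2)]
rings = sweep_slice_along_spine(slice_path, spine_path)
```

A fuller implementation would additionally connect successive rings into faces to form the generated geometry.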
A three-dimensional model that is assembled from multiple three-dimensional model parts may comprise information to enable it to be reconstructed depending on one or more variables affecting the shape, resolution and/or one or more other attributes of its constituent part(s).
In known three-dimensional model generation systems, models are generally stored and/or exported in a format that describes vertex points, faces that connect the vertex points, and sometimes surface data along with some metadata. Such a representation does not embed higher-level parameters that control an overall intended shape of the three-dimensional geometry. Since there are minimal or no parameterized variables affecting the shape of the three-dimensional geometry in known systems, it is difficult to change the shape of the geometry meaningfully. This impedes the flexibility and ability to efficiently evolve a three-dimensional geometry towards a desired target three-dimensional shape and/or towards a three-dimensional shape that satisfies one or more specified criteria. For example, when a three-dimensional model of a banana is created in a known system, there may be no parameters that can easily and meaningfully control the thickness or length of the three-dimensional banana geometry. A model created using the techniques described above may be decomposable, reusable and/or editable. A model created using the techniques described above may be thought of as the top of a tree structure in which one or more leaf nodes are spine-diagrams. It may be possible to decompose the model to its constituent one or more spine-diagrams and view the steps performed and/or rules executed to create an associated geometry from the one or more spine-diagrams. This availability of full model creation history enables easy and efficient collaboration, customisation and/or creation of derived geometries. It may be possible to express any three-dimensional shape in terms of one or more spine-diagrams and one or more rules. One or more primitives of a spine may be modified. It may be possible to delete, insert and/or modify one or more sub-geometries of a generated geometry and reconstruct a higher-level geometry using the analysis module 207.
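The tree structure described above may be sketched, purely for illustration, as follows. The `Node` class and the example banana model are hypothetical; they merely show how leaf spine-diagrams and creation steps could be traversed to recover a model's creation history.

```python
class Node:
    """A model node: either a leaf spine-diagram or a rule applied to
    child geometries, so the full creation history is traversable."""

    def __init__(self, name, rule=None, children=()):
        self.name = name
        self.rule = rule
        self.children = list(children)

    def leaves(self):
        # Leaf nodes correspond to spine-diagrams; internal nodes
        # correspond to rules executed on their children.
        if not self.children:
            return [self.name]
        return [leaf for child in self.children for leaf in child.leaves()]

# A hypothetical banana model assembled from two spine-diagrams.
model = Node("banana",
             rule="assemble",
             children=[Node("body-spine-diagram"),
                       Node("stalk-spine-diagram")])
```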
The techniques described above may provide improved collaboration and reusability of models. Shareable models and/or paths may result in a marketplace for collaborative construction of complex models. A repository of such models, and/or named paths, may be created for use in collaborative work. Users connected to a network may be able to perform concurrent creation and/or modification of three-dimensional models in the same workspace.
The techniques described above may facilitate dynamic rule evaluation, runtime modification and/or assembly. The apparatus may be configured to re-evaluate one or more rules associated with a spine or non-spine at runtime and/or on demand. This allows interdependent geometries to correct their position, orientation and/or one or more other properties based on the updated location of the geometries on which they are dependent. An adjustable thickness, length, resolution, orientation, or the like may be made dependent on an attribute (for example a corresponding attribute) of another geometry. Runtime modification of one or more attributes (for example a thickness, length, spine-path, slice-path, silhouette-outline-path, displacement-map or the like) may be available. The apparatus may be configured to create smart-objects, such as shapes that are dependent on one or more embedded variables. The apparatus may be configured to perform dynamic and/or automatic shape change based on one or more variable values of the geometry, for example by reconstructing the geometry with one or more new variable values. For example, a three-dimensional model of a table may be made such that its height is dependent on a variable, such as user height.
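A smart-object of the kind described above may be sketched as follows. This is a hypothetical Python sketch; the `SmartTable` class, the 45% rule and the millimetre units are illustrative assumptions, showing only how a dependent geometry could be reconstructed when an embedded variable changes.

```python
def table_height_mm(user_height_mm):
    # Hypothetical rule: table top at roughly 45% of the user's height.
    return round(0.45 * user_height_mm)

class SmartTable:
    """A shape whose geometry is rebuilt whenever an embedded variable
    changes, mirroring the runtime re-evaluation described above."""

    def __init__(self, user_height_mm):
        self.user_height_mm = user_height_mm
        self.rebuild()

    def rebuild(self):
        # Re-evaluate the rule that derives the dependent attribute.
        self.height_mm = table_height_mm(self.user_height_mm)

    def set_user_height(self, user_height_mm):
        self.user_height_mm = user_height_mm
        self.rebuild()  # the dependent geometry corrects itself

table = SmartTable(1700)
```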
The techniques described above may facilitate efficient storage. When stored as spine data-structures, the storage space required may be less than the amount of storage that would be required to store a three-dimensional model in terms of vertex and face meshes, especially for high-resolution shapes. The storage space for a spine may be the same irrespective of mesh resolution, since the three-dimensional geometry is generated at runtime by parsing the spine and/or non-spine data-structures and executing the associated rules.
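The storage saving described above may be illustrated with a simple count of stored values. The counting scheme below is a simplifying assumption made for illustration only: two coordinates per 2D point for a spine representation, three coordinates per vertex for a mesh.

```python
def spine_storage_values(n_slice_pts, n_spine_pts):
    # Parameters stored once, independent of mesh resolution:
    # two floats per 2D point of the slice-path and spine-path.
    return 2 * (n_slice_pts + n_spine_pts)

def mesh_storage_values(radial_res, axial_res):
    # A vertex/face mesh stores three floats per vertex, and the
    # vertex count grows with the requested resolution.
    return 3 * radial_res * axial_res

spine_cost = spine_storage_values(8, 4)   # same cost at any resolution
low_res_mesh = mesh_storage_values(32, 16)
high_res_mesh = mesh_storage_values(128, 64)
```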
The techniques described above may facilitate auto-generation and/or customisation of virtual scenes and calculation of derived data. Multiple geometries created using the apparatus may be combined with each other to create a virtual scene. For example, a three-dimensional model of a room may be made by assembling three-dimensional geometries of furniture and walls. Each of the constituent geometries may be parameterized; for example, they may be the result of one or more rules executed on one or more spine-diagrams. This makes it relatively easy to alter, replace and/or customize the scene geometries and/or derive information from the scene programmatically. Parameterized geometries in the assembled virtual scene facilitate extraction of derived information, for example calculating the height of the room, calculating the surface area of walls for paint coverage and the like.
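The extraction of derived information described above may be sketched as follows. The `wall_paint_area` function is a hypothetical example of a calculation evaluated on parameterized scene geometry (a rectangular room); the parameter names and units are illustrative assumptions.

```python
def wall_paint_area(length_m, width_m, height_m, openings_m2=0.0):
    """Paintable wall area of a rectangular room: perimeter multiplied
    by wall height, minus the area of door and window openings."""
    perimeter = 2 * (length_m + width_m)
    return perimeter * height_m - openings_m2

# A 5 m x 4 m room, 2.5 m high, with 3 m^2 of openings.
area = wall_paint_area(5.0, 4.0, 2.5, openings_m2=3.0)
```

Because the room's dimensions are parameters of the scene rather than baked into a mesh, such derived quantities can be recomputed automatically whenever the scene is customized.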
The above embodiments are to be understood as illustrative examples of the invention. Further embodiments of the invention are envisaged.
For example, in addition to using spines and/or non-spines to represent three-dimensional model data, the apparatus may be configured to allow importing of plain three-dimensional geometry data to the three-dimensional workspace. Examples of plain three-dimensional geometry data include, but are not limited to, vertices and faces. The apparatus may be able to treat such imported three-dimensional geometry data as a non-spine.
In other examples, the spine-diagram may contain a one-to-one mapping of silhouette-outline-paths to spine-range-selector-paths through association-lines. In such examples, the input parser module 701 and the geometry generator module 702 may be configured to associate the calculated width-at-segments of each of the silhouettes with the corresponding spine-path segment points.
In some examples described above, locations of one or more slice boundary points are calculated along the perimeter of the slice-path during processing of the slice-path shape by placing the slice boundary points at equal intervals along the boundary.
In other examples, the slice boundary points may be distributed along the slice-path perimeter at unequal intervals. For example, where the deviation at a point on the path is relatively high (for example indicated by an angle of a curve derivative at the point), a relatively high number of slice boundary points can be located around that point. In another example, boundary point placement may be performed by the user by specifying the desired location(s) of some or all of the boundary points along the slice-path. In another example, an optimisation on the number and location of slice boundary points can be performed such that the slice boundary points form a shape as close as possible to the slice-path shape. For example, a requirement could be set that the total area enclosed by the shape formed by joining the boundary points should be at least a desired proportion (for example 70%) of the total slice-path area. A desired parametric modification of the spine's geometry, during the parsing stage, may be possible by providing one or more modification parameters in the input data (for example an SVG file) and creating an association between the spine-path or other primitive and the one or more modification parameters.
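The equal-interval placement of slice boundary points described above may be sketched as follows. This is a minimal Python sketch assuming the slice-path is given as a closed polyline; curve-derivative-based or user-specified placement, as described above, would replace the uniform arc-length stepping.

```python
import math

def boundary_points_equal_intervals(path, n):
    """Place n boundary points at equal arc-length intervals along a
    closed polyline given as a list of (x, y) vertices."""
    verts = list(path) + [path[0]]  # close the path
    seg_lens = [math.dist(verts[i], verts[i + 1])
                for i in range(len(verts) - 1)]
    step = sum(seg_lens) / n
    points, seg, seg_start = [], 0, 0.0
    for k in range(n):
        target = k * step
        # Advance to the segment containing the target arc length.
        while seg_start + seg_lens[seg] < target:
            seg_start += seg_lens[seg]
            seg += 1
        t = (target - seg_start) / seg_lens[seg]
        (x0, y0), (x1, y1) = verts[seg], verts[seg + 1]
        points.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return points

# Four equally spaced points on a unit square land on its corners.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
pts = boundary_points_equal_intervals(square, 4)
```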
In addition, or as an alternative, to using association-lines, association between primitives may be based on other association factors. Examples of such association factors include, but are not limited to: SVG-grouping of associated primitives and assigning similar SVG properties to associated primitives.
The evolution module 208 may be configured to derive the shape of a region of a target geometry instead of the whole target geometry. This may improve efficiency (for example by reducing the time taken) of the evaluation of one or more applicable fitness functions. The generated shapes of multiple regions of the target geometry may be combined to produce a larger part of, or the whole of, the target geometry. A user may be provided with an option to view and/or modify user-defined variables, for example in a GUI. Any change in such variables may trigger a change in the shape of the three-dimensional geometry by affecting one or more spine parameters, non-spine parameters and/or rules.
One or more simultaneous constraints may be introduced in addition to sequentially executed rules. A simultaneous constraint may require that any operation on a spine and/or non-spine takes place only if an active constraint is not violated. This may result in a different three-dimensional geometry for the generated spine or non-spine.
The apparatus may be configured to create one or more spines and/or one or more non-spines by importing three-dimensional model data in an appropriate format. Example formats may include, but are not limited to, STereoLithography (.stl) and Object (.obj). The apparatus may be configured to deduce a combination of one or more spines and one or more modifier rules based on which the imported spine and/or non-spine can be generated. This may, for example, involve using the evolution module 208 on one or more generic and/or simple spines. In such cases, the imported three-dimensional model may be associated with a list of creation steps that can be traced back to input data, for example one or more spine-diagrams.
In some examples described above, the geometry generator module 702 is configured to derive information such as width-at-segment-point data, boundary point data and the like from one or more data-structures obtained from the input parser module 701. In some examples, a user may provide such information directly to the geometry generator module 702. For example, the user may be able to provide width-at-segment-point data, boundary point data and the like to the geometry generator module 702 directly in text or another format.
Some acts have been described above as discrete acts. However, multiple acts may be combined into a single operation and/or an act may be carried out in multiple operations. Some acts have been described above as taking place in a given order. However, at least some such acts may be performed in a different order than that described above.
It is to be understood that any feature described in relation to any one embodiment may be used alone, or in combination with other features described, and may also be used in combination with one or more features of any other of the embodiments, or any combination of any other of the embodiments. Furthermore, equivalents and modifications not described above may also be employed without departing from the scope of the invention, which is defined in the accompanying claims.

Claims (59)

1. An apparatus configured to generate a three-dimensional model of an object, the apparatus comprising: an input module configured to obtain input data defining a plurality of line-segments in a two-dimensional space, the plurality of line-segments comprising a first line-segment representing a slice-path and a second line-segment representing a spine-path; and a three-dimensional model generator module configured to generate the three-dimensional model of the object using data derived from the first line-segment and the second line-segment.
2. An apparatus according to claim 1, wherein one or both of the first and second line-segments are hand-drawn.
3. An apparatus according to claim 1 or 2, wherein one or both of the first and second line-segments are completely contained within the two-dimensional space.
4. An apparatus according to any of claims 1 to 3, wherein the input module is configured to obtain at least some of the input data via an input device selected from the group consisting of: a digital drawing surface; a keyboard; a computer mouse; a touchpad; a motion capture device; a touch-sensitive display screen; and an image-capture device.
5. An apparatus according to any of claims 1 to 4, wherein the input data comprises one or both of image data and text data.
6. An apparatus according to any of claims 1 to 5, wherein the three-dimensional model generator module comprises an input parser module configured to: process the input data; and output said data derived from the first line-segment and the second line-segment.
7. An apparatus according to claim 6, wherein the input parser module is configured to recognise that the first line-segment represents a slice-path and that the second line-segment represents a spine-path based at least in part on information in the input data.
8. An apparatus according to any of claims 1 to 7, wherein the three-dimensional model generator module comprises a geometry generator module configured to generate said three-dimensional model using the data derived from the first line-segment and the second line-segment.
9. An apparatus according to any of claims 1 to 8, wherein the three-dimensional model generator module comprises a post-processor module configured to perform one or more post-processing operations.
10. An apparatus according to claim 9, wherein the input data comprises data associated with one or more marker-objects and wherein the one or more post-processing operations comprises defining at least one attribute of one or more marker-objects using data derived from the input data.
11. An apparatus according to any of claims 1 to 10, comprising a modification and assembly module configured to perform one or more modification operations and/or one or more assembly operations.
12. An apparatus according to claim 11, wherein the one or more modification operations and/or one or more assembly operations comprise adjusting a resolution of at least part of the three-dimensional model.
13. An apparatus according to any of claims 1 to 12, comprising a scripting module configured to execute one or more scripting operations.
14. An apparatus according to any of claims 1 to 13, comprising an analysis module configured to perform one or more analysis operations.
15. An apparatus according to claim 14, wherein the one or more analysis operations comprise: analysing a creation history of the three-dimensional model; and generating creation history metadata based at least in part on said analysis.
16. An apparatus according to any of claims 1 to 15, comprising an evolution module configured to perform one or more evolution operations.
17. An apparatus according to claim 16, wherein the one or more evolution operations comprise modifying at least one attribute of the three-dimensional model based at least in part on one or more target attributes for the three-dimensional model.
18. An apparatus according to claim 17, wherein the one or more target attributes are derived from data selected from the group consisting of: target three-dimensional geometry data; depth-map data; rendered image data; and photograph data.
19. An apparatus according to any of claims 1 to 18, comprising a management module configured to perform one or more management operations.
20. An apparatus according to any of claims 1 to 19, wherein the plurality of line-segments comprises at least one further line-segment.
21. An apparatus according to claim 20, wherein the geometry generator module is configured to generate the three-dimensional model of the object using data derived from the at least one further line-segment in addition to said data derived from the first line-segment and the second line-segment.
22. An apparatus according to claim 20 or 21, wherein the at least one further line-segment represents a primitive selected from the group consisting of a spine-range-selector-path, a slice-path, a silhouette-outline-path, a width-line associated with a silhouette-outline-path, a z-path and an association-line.
23. An apparatus according to any of claims 1 to 22, comprising a file exporter module configured to generate a data file representing at least part of the three-dimensional model.
24. An apparatus according to claim 23, wherein the file exporter module is configured to generate the data file in a 3D-printable format.
25. A system comprising: an apparatus according to any of claims 1 to 24; and a 3D-printer configured to create a physical object based at least in part on the three-dimensional model of the object.
26. A method of generating a three-dimensional model of an object, the method comprising: obtaining input data defining a plurality of line-segments in a two-dimensional space, the plurality of line-segments comprising a first line-segment representing a slice-path and a second line-segment representing a spine-path; and generating the three-dimensional model of the object using data derived from the first line-segment and the second line-segment.
27. A method according to claim 26, wherein one or both of the first and second line-segments are hand-drawn.
28. A method according to claim 26 or 27, wherein one or both of the first and second line-segments are completely contained within the two-dimensional space.
29. A method according to any of claims 26 to 28, comprising obtaining at least some of the input data via an input device selected from the group consisting of: a digital drawing surface; a keyboard; a computer mouse; a touchpad; a motion capture device; a touch-sensitive display screen; and an image-capture device.
30. A method according to any of claims 26 to 29, wherein the input data comprises one or both of image data and text data.
31. A method according to any of claims 26 to 30, comprising: processing the input data; and outputting said data derived from the first line-segment and the second line-segment.
32. A method according to claim 31, comprising recognising that the first line-segment represents a slice-path and that the second line-segment represents a spine-path based at least in part on information in the input data.
33. A method according to any of claims 26 to 32, comprising performing one or more post-processing operations.
34. A method according to claim 33, wherein the input data comprises data associated with one or more marker-objects and wherein performing the one or more post-processing operations comprises defining at least one attribute of one or more marker-objects using data derived from the input data.
35. A method according to any of claims 26 to 34, comprising performing one or more modification operations and/or one or more assembly operations.
36. A method according to claim 35, wherein performing the one or more modification operations and/or one or more assembly operations comprises adjusting a resolution of at least part of the three-dimensional model.
37. A method according to any of claims 26 to 36, comprising executing one or more scripting operations.
38. A method according to any of claims 26 to 37, comprising performing one or more analysis operations.
39. A method according to claim 38, wherein performing the one or more analysis operations comprises: analysing a creation history of the three-dimensional model; and generating creation history metadata based at least in part on said analysis.
40. A method according to any of claims 26 to 39, comprising performing one or more evolution operations.
41. A method according to claim 40, wherein performing the one or more evolution operations comprises modifying at least one attribute of the three-dimensional model based at least in part on one or more target attributes for the three-dimensional model.
42. A method according to claim 41, wherein the one or more target attributes are derived from data selected from the group consisting of: target three-dimensional geometry data; depth-map data; rendered image data; and photograph data.
43. A method according to any of claims 26 to 42, comprising performing one or more management operations.
44. A method according to any of claims 26 to 43, wherein the plurality of line-segments comprises at least one further line-segment.
45. A method according to claim 44, wherein generating the three-dimensional model of the object comprises using data derived from the at least one further line-segment in addition to said data derived from the first line-segment and the second line-segment.
46. A method according to claim 44 or 45, wherein the at least one further line-segment represents a primitive selected from the group consisting of a spine-range-selector-path, a slice-path, a silhouette-outline-path, a width-line associated with a silhouette-outline-path, a z-path and an association-line.
47. A method according to any of claims 26 to 46, comprising generating a data file representing at least part of the three-dimensional model.
48. A method according to claim 47, comprising generating the data file in a 3D-printable format.
49. A method according to any of claims 26 to 48, comprising creating a physical object based at least in part on the three-dimensional model of the object.
50. A computer program arranged when executed to perform a method of generating a three-dimensional model of an object, the method comprising: obtaining input data defining a plurality of line-segments in a two-dimensional space, the plurality of line-segments comprising a first line-segment representing a slice-path and a second line-segment representing a spine-path; and generating the three-dimensional model of the object using data derived from the first line-segment and the second line-segment.
51. A non-transitory computer-readable storage medium comprising computer-executable instructions which, when executed by a processor, cause a computing device to perform a method of generating a three-dimensional model of an object, the method comprising: obtaining input data defining a plurality of line-segments in a two-dimensional space, the plurality of line-segments comprising a first line-segment representing a slice-path and a second line-segment representing a spine-path; and generating the three-dimensional model of the object using data derived from the first line-segment and the second line-segment.
52. An apparatus configured to generate a three-dimensional model of an object, the apparatus comprising: an input module configured to obtain input data; a three-dimensional model generator module configured to generate the three-dimensional model of the object using data derived from the input data; and an evolution module configured to modify at least one feature of the three-dimensional model of the object based at least in part on one or more target attributes for the three-dimensional model.
53. A computer-implemented method of generating a three-dimensional model of an object, the method comprising: obtaining input data; generating the three-dimensional model of the object using data derived from the input data; and modifying at least one feature of the three-dimensional model of the object based at least in part on one or more target attributes for the three-dimensional model.
54. A computer program arranged when executed to perform a method of generating a three-dimensional model of an object, the method comprising: obtaining input data; generating the three-dimensional model of the object using data derived from the input data; and modifying at least one feature of the three-dimensional model of the object based at least in part on one or more target attributes for the three-dimensional model.
55. A non-transitory computer-readable storage medium comprising computer-executable instructions which, when executed by a processor, cause a computing device to perform a method of generating a three-dimensional model of an object, the method comprising: obtaining input data; generating the three-dimensional model of the object using data derived from the input data; and modifying at least one feature of the three-dimensional model of the object based at least in part on one or more target attributes for the three-dimensional model.
56. An apparatus configured to generate a three-dimensional model of an object, the apparatus comprising: an input module configured to obtain input data, the input data comprising data associated with one or more marker-objects; a three-dimensional model generator module configured to generate the three-dimensional model of the object using data derived from the input data; and a post-processor module configured to perform one or more post-processing operations in relation to the three-dimensional model of the object, wherein the one or more post-processing operations comprises defining at least one marker-object attribute of one or more marker-objects using data derived from the input data.
57. A computer-implemented method of generating a three-dimensional model of an object, the method comprising: obtaining input data, the input data comprising data associated with one or more marker-objects; generating the three-dimensional model of the object using data derived from the input data; and performing one or more post-processing operations in relation to the three-dimensional model of the object, wherein the one or more post-processing operations comprises defining at least one marker-object attribute of one or more marker-objects using data derived from the input data.
58. A computer program arranged when executed to perform a method of generating a three-dimensional model of an object, the method comprising: obtaining input data, the input data comprising data associated with one or more marker-objects; generating the three-dimensional model of the object using data derived from the input data; and performing one or more post-processing operations in relation to the three-dimensional model of the object, wherein the one or more post-processing operations comprises defining at least one marker-object attribute of one or more marker-objects using data derived from the input data.
59. A non-transitory computer-readable storage medium comprising computer-executable instructions which, when executed by a processor, cause a computing device to perform a method of generating a three-dimensional model of an object, the method comprising: obtaining input data, the input data comprising data associated with one or more marker-objects; generating the three-dimensional model of the object using data derived from the input data; and performing one or more post-processing operations in relation to the three-dimensional model of the object, wherein the one or more post-processing operations comprises defining at least one marker-object attribute of one or more marker-objects using data derived from the input data.
GB1513264.0A 2015-07-28 2015-07-28 Apparatus, methods, computer programs and non-transitory computer-readable storage media for generating a three-dimensional model of an object Withdrawn GB2540791A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1513264.0A GB2540791A (en) 2015-07-28 2015-07-28 Apparatus, methods, computer programs and non-transitory computer-readable storage media for generating a three-dimensional model of an object


Publications (2)

Publication Number Publication Date
GB201513264D0 GB201513264D0 (en) 2015-09-09
GB2540791A true GB2540791A (en) 2017-02-01

Family

ID=54106720

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1513264.0A Withdrawn GB2540791A (en) 2015-07-28 2015-07-28 Apparatus, methods, computer programs and non-transitory computer-readable storage media for generating a three-dimensional model of an object

Country Status (1)

Country Link
GB (1) GB2540791A (en)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111179297B (en) * 2018-11-13 2023-09-19 北京航空航天大学 Multi-contour generation method, device and system of point cloud
CN115205472B (en) * 2022-09-16 2022-12-02 成都国星宇航科技股份有限公司 Grouping method, device and equipment for live-action reconstruction pictures and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Gingold et al., 'Structured annotations for 2D-to-3D modeling', Proc. ACM SIGGRAPH Asia 2009, Vol. 28, Issue 5, Dec. 2009, Article No. 148 *
Lopez-Tovar et al., 'Learning sketch-based 3D modelling from user's sketching gestures', 19th International Conf. on Intelligent User Interfaces (Sketch: Pen and Touch Recognition Workshop), 2014 *
Naya et al., 'Direct modeling: from sketches to 3D models', Proc. 1st Ibero-American Symposium in Computer Graphics (SIACG), 2002, pp. 109-117 *

Cited By (4)

Publication number Priority date Publication date Assignee Title
US20210316510A1 (en) * 2020-03-25 2021-10-14 Opt Industries, Inc. Systems, methods and file format for 3d printing of microstructures
US11567474B2 (en) * 2020-03-25 2023-01-31 Opt Industries, Inc. Systems, methods and file format for 3D printing of microstructures
US11681269B2 (en) 2020-03-25 2023-06-20 Opt Industries, Inc. Systems, methods and file format for 3D printing of microstructures
WO2022056036A3 (en) * 2020-09-11 2022-05-05 Apple Inc. Methods for manipulating objects in an environment

Also Published As

Publication number Publication date
GB201513264D0 (en) 2015-09-09

Similar Documents

Publication Publication Date Title
Karpenko et al. Free‐form sketching with variational implicit surfaces
Tai et al. Prototype modeling from sketched silhouettes based on convolution surfaces
Li et al. Interactive cutaway illustrations of complex 3D models
JP7343963B2 (en) Dataset for learning functions that take images as input
US7672822B2 (en) Automated three-dimensional alternative position viewer
GB2555698B (en) Three-dimensional model manipulation and rendering
Trescak et al. A shape grammar interpreter for rectilinear forms
US20110050691A1 (en) Real-time user guided optimization of general 3d data
Beccari et al. A fast interactive reverse-engineering system
Gao et al. An approach to solid modeling in a semi-immersive virtual environment
CN113221857B (en) Model deformation method and device based on sketch interaction
Wang et al. Multiscale vector volumes
GB2540791A (en) Apparatus, methods, computer programs and non-transitory computer-readable storage media for generating a three-dimensional model of an object
CN113901367A BIM (building information modeling) mass model display method based on WebGL + VR
JP2010262637A (en) Method, program and product edition system for visualizing object displayed on computer screen
Garcia-Cantero et al. Neurotessmesh: a tool for the generation and visualization of neuron meshes and adaptive on-the-fly refinement
Popov et al. Efficient contouring of functionally represented objects for additive manufacturing
Morigi et al. Reconstructing surfaces from sketched 3d irregular curve networks
Leimer et al. Relation-based parametrization and exploration of shape collections
Hristov et al. Approach for mesh optimization and 3d web visualization
Cuno et al. 3D free-form modeling with variational surfaces
Kratt et al. Non-realistic 3D object stylization
Horešovský Visualization of the difference between two triangle meshes
Pitzalis et al. Working with Volumetric Meshes in a Game Engine: a Unity Prototype.
Isenberg et al. 3D illustrative effects for animating line drawings

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)