WO2021068061A1 - System and method for generating 3d models from specification documents - Google Patents

System and method for generating 3D models from specification documents

Info

Publication number
WO2021068061A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
machine learning
building information
mesh
extraction module
Prior art date
Application number
PCT/CA2020/051337
Other languages
French (fr)
Inventor
Boyang Liu
Tingcheng CUI
Nanyi Jiang
Yeqi SANG
Original Assignee
Orbiseed Technology Inc.
Priority date
Filing date
Publication date
Application filed by Orbiseed Technology Inc.
Publication of WO2021068061A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/20Design optimisation, verification or simulation
    • G06F30/27Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/10Geometric CAD
    • G06F30/13Architectural design, e.g. computer-aided architectural design [CAAD] related to design of buildings, bridges, landscapes, production plants or roads
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/20Design optimisation, verification or simulation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/10Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Architecture (AREA)
  • Civil Engineering (AREA)
  • Structural Engineering (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A system and method for automating the production of 3-dimensional building information modeling files is disclosed. The system comprises a plurality of data sources; a computer processor for executing an artificial intelligence engine stored in a computer readable memory, the artificial intelligence engine comprising: a machine learning extraction module for extracting relevant data from the plurality of data sources, and a machine learning generation module for extruding a 3-dimensional model from the plurality of data sources and generating a building information modeling file from the extracted relevant data; and at least one database for storing the building information modeling file.

Description

SYSTEM AND METHOD FOR GENERATING 3D MODELS FROM SPECIFICATION
DOCUMENTS
Field
The present invention relates to building information modelling. Specifically, the present invention relates to generating 3-dimensional models from design specification documents.
Background
3D modeling has been adopted in many industries, such as video games, film, engineering, product design, animation, and data visualization. 3D modeling often provides an intuitive way for people to understand designs, which may lead to higher sales conversions, smoother project workflows, and reduced conflict due to design misinterpretation. More specifically, a special kind of 3D modeling called building information modeling (BIM) attaches additional data to the 3D models, which also act as a centralized database that is shared throughout the engineering process.
However, creating the 3D models is labour intensive. Currently, 3D modeling is a mostly manual process that requires an individual to look at reference files (floorplans, design guidelines, process diagrams, 3D scans, etc.), manually create the 3D model, and enter the associated data into BIM software (such as Autodesk Revit™). This process requires very specialized skillsets and represents a bottleneck that limits project throughput for many engineering firms.
BIM requires a completely different workflow than the traditional engineering and design process, which focuses on 2D designs. This means that all levels of the design and engineering process require training on new software, methods, and procedures. The cost of this implementation is very high and often offsets the benefit 3D brings, reducing the return on investment of adopting 3D.
The discussion of the background herein is included solely to explain the context of the inventions described herein. This is not to be taken as an admission that any of the material referred to was published, known, or part of the common general knowledge as of the priority date of any of the claims.
Summary
In accordance with an aspect, there is provided a system for automating the production of 3-dimensional building information modeling files comprising: a plurality of data sources; a computer processor for executing an artificial intelligence engine stored in a computer readable memory, the artificial intelligence engine comprising: a machine learning extraction module for extracting relevant data from the plurality of data sources, and a machine learning generation module for extruding a 3-dimensional model from the plurality of data sources and generating a building information modeling file from the extracted relevant data; and at least one database for storing the building information modeling file.
In accordance with another aspect, there is provided a method for automating the production of 3-dimensional building information modeling files utilizing an artificial intelligence engine, comprising steps of: acquiring data from a plurality of data sources using a machine learning extraction module in the artificial intelligence engine; extruding a 3-dimensional model from the acquired data using a machine learning generation module in the artificial intelligence engine; and generating, using the artificial intelligence engine, a building information modeling file from the extracted relevant data.
It is understood that one or more of the aspects described herein (and above) may be combined in any suitable manner. The novel features of the present invention will become apparent to those of skill in the art upon examination of the following detailed description of the invention. It should be understood, however, that the detailed description of the invention and the specific examples presented, while indicating certain aspects of the present invention, are provided for illustration purposes only because various changes and modifications within the spirit and scope of the invention will become apparent to those of skill in the art from the detailed description of the invention and claims that follow.
Brief Description of the Drawings
The present invention will be further understood from the following description with reference to the Figures, in which:
Figure 1 shows a system diagram of the AI engine for producing 3D BIM files.
Figure 2 shows a flow chart depicting the process followed by the system of Figure 1.
Figure 3 shows a flow chart depicting an example process followed by the system of Figure 1.
Figure 4 shows a 2-dimensional floor plan used by the process of Figure 3 to create a 3D BIM file.
Figure 5 shows the 2D floorplan of Figure 4 as it is being masked during the process of Figure 3.
Figure 6 shows an exemplary portion from the 2D floorplan of Figure 4 where contours have been determined according to the process of Figure 3.
Figure 7 shows an exemplary portion from the 2D floorplan of Figure 4, where an enclosed space has been determined according to the process of Figure 3.
Figure 8 shows a generated mesh for the 2D floorplan of Figure 4.
Figure 9 shows an exemplary extruded 3D model from the 2D floorplan of Figure 4.
Figure 10 shows the 3D model of Figure 9 populated with props.
Figure 11 shows an alternate embodiment of the process of Figure 2.
Figure 12 shows an alternate embodiment of the process of Figure 2.
Detailed Description of Certain Aspects
Definitions
As used herein, Artificial Intelligence (AI) refers to artificially created technology capable of adapting itself to solve problems.
As used herein, Machine Learning (ML) refers to a subset of AI that uses data to train a computer algorithm to continually and automatically improve itself through experience.
As used herein, a neural network refers to a type of ML model inspired by the human brain, which learns concepts in a manner similar to human learning.
As used herein, Computer Aided Design (CAD) refers to software used to create design and engineering documents and drawings in two dimensions (2D) or three dimensions (3D).
As used herein, 3D modeling refers to a three-dimensional (3D) representation of data characterizing a real-world object, allowing a user to readily view the object from different angles.
As used herein, Building Information Modeling (BIM) refers to a special type of 3D modeling where additional data beyond the visual representation, such as building materials, are attached to each model.
As used herein, Procedural Generation refers to the automated creation of 3D models using a set of parameters.
As used herein, contour refers to an array of 2D points that encloses an area to define a shape.
As used herein, Natural Language Processing (NLP) refers to a field of machine learning concerned with the ability of a computer to understand, analyze, manipulate, and potentially generate human language.
As used herein, point cloud refers to a collection of data points defined by a given coordinate system. In a 3D coordinate system, for example, a point cloud may define the shape of some real or created physical system. Point clouds are used to create 3D meshes and other models used in 3D modeling for various fields, including medical imaging, architecture, 3D printing, manufacturing, 3D gaming, and various virtual reality (VR) applications.
System Architecture
Turning now to Figure 1, an exemplary system for automatically producing 3D models from design specification documents is shown and generally referenced by the number 100. The system 100 represents an artificial intelligence (AI) engine 110 that utilizes machine learning (ML) algorithms to automate the creation of building information modeling models. In the embodiment shown in Figure 1, the system 100 comprises data sources 102, from which the AI engine 110 extracts relevant data via an ML extraction algorithm 112; the extracted data is used to generate 3D BIM models or 3D BIM files 118 via an ML generation algorithm 114.
The AI engine 110 may reside in memory (not shown) on a server or computer (not shown), and the ML extraction algorithm 112 and the ML generation algorithm 114 are programming modules residing in and retrievable from the memory. Data sources 102 may include, but are not limited to, past project data 104, design specification documents 106, and additional input 108. Past project data 104 may include, but is not limited to, solutions to design problems that have been solved in the past. Design specification documents 106 may include, but are not limited to, reference files used by engineers, such as floorplans, flow diagrams, building codes, 3D scans, design sketches, and blueprints. Additional input 108 may include, but is not limited to, data that is unique to the current project or problem, will not be repeated in other projects, and is therefore not necessary for training the AI engine 110. 3D BIM files 118, extracted relevant data, building codes, and past project data may be saved to a database or a plurality of databases.
Process
Figure 2 shows an exemplary process for producing 3D BIM files, which is referenced by the number 120. In step 122, an engineering project is initiated for which a 3D BIM file is desired. Such engineering projects may include, but are not limited to, construction of residential buildings, commercial buildings, industrial buildings, or institutional buildings. In step 124, the design scope of the engineering project is determined. This is usually a collaborative effort between a client and the engineering project team or engineering consultants. From this determination, the design specification documents 106 are created in step 126. In step 128, the design specification documents 106 are input into the AI engine 110, where the ML extraction algorithm 112 extracts the relevant data from the design specification documents 106 and other data sources 102.
The ML extraction algorithm 112 is used to extract data from the data sources 102, such as past project data 104, design specification documents 106, and additional inputs 108. As an example of the extraction method, for floorplans, blueprints, and design diagrams, the ML extraction algorithm 112 may employ computer vision techniques such as a convolutional neural network. In another example, to extract relevant data from text-based data sources, such as building codes or design specifications, natural language processing may be employed by the ML extraction algorithm 112.
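By way of illustration only, a minimal Python sketch of such a dispatch between computer-vision and natural-language extraction routines is shown below; the file-type mapping and the helper functions are hypothetical placeholders rather than the actual implementation of the ML extraction algorithm 112.
```python
# Minimal sketch (assumed file types and helper names) of routing data sources
# to the appropriate extractor: computer vision for drawings, NLP for text.
from pathlib import Path

IMAGE_TYPES = {".png", ".bmp", ".jpg", ".tif"}   # rasterized floorplans, diagrams
TEXT_TYPES = {".txt", ".pdf", ".docx"}           # building codes, specifications

def extract(source_path: str) -> dict:
    """Dispatch a data source to a computer-vision or NLP extraction routine."""
    suffix = Path(source_path).suffix.lower()
    if suffix in IMAGE_TYPES:
        return extract_with_vision(source_path)   # e.g. CNN-based segmentation
    if suffix in TEXT_TYPES:
        return extract_with_nlp(source_path)      # e.g. parameter extraction
    raise ValueError(f"Unsupported data source: {source_path}")

def extract_with_vision(path: str) -> dict:
    # Placeholder for a convolutional-neural-network pipeline (see masking below).
    return {"kind": "drawing", "path": path}

def extract_with_nlp(path: str) -> dict:
    # Placeholder for a natural-language-processing pipeline (see Figure 12 example).
    return {"kind": "text", "path": path}
```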
In step 130, the ML generation algorithm 114 utilizes the extracted data to produce a 3D model. In step 132, the ML generation algorithm 114 determines which type of file is needed for the project, based on user input. If needed, then in step 134, a 3D CAD file is generated and may be saved to a database. If needed, then in step 136, a 2D CAD file is generated and saved to the database. If needed, then in step 138, a 3D BIM file 118 is generated and saved to the database. In step 140, appropriate engineering documents are produced from the BIM file 118. Engineering documents may include, but are not limited to, bills of materials, specification sheets, and calculations of specific areas or dimensions.
The ML generation algorithm 114 creates the 3D BIM file 118 based on the type of input and the specific use case. For example, data for wall thickness, height, and material pulled from the extracted data may be used to automatically generate the BIM file 118 for a wall and may be appended to the model. In this example, the wall thickness may be extracted by the ML extraction algorithm 112 from a floorplan. The wall height may be extracted by the ML extraction algorithm 112 from building codes. The wall material may be extracted by the ML extraction algorithm 112 from user choices or from a photo or other data source.
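By way of illustration only, the following minimal Python sketch shows how wall attributes drawn from different sources might be merged into a single element record before being appended to the BIM file 118; the field names and default values are assumptions for the example.
```python
# Minimal sketch (hypothetical field names) of merging extracted wall attributes
# from different sources into a single BIM element record.
from dataclasses import dataclass, asdict

@dataclass
class WallElement:
    thickness_mm: float   # e.g. extracted from the floorplan
    height_mm: float      # e.g. taken from building codes if not specified
    material: str         # e.g. from user choice or a photo

def build_wall_record(floorplan_data: dict, building_code: dict, user_choices: dict) -> dict:
    wall = WallElement(
        thickness_mm=floorplan_data["wall_thickness_mm"],
        height_mm=building_code.get("default_wall_height_mm", 2400.0),
        material=user_choices.get("wall_material", "concrete"),
    )
    return asdict(wall)  # appended to the model / stored in the BIM file

print(build_wall_record({"wall_thickness_mm": 200.0}, {}, {}))
```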
Example
Figure 3 shows a flow chart detailing an example where the system and method of Figures 1 and 2 respectively are utilized. This example is merely an illustration of one way to utilize the system and method and is not meant to be limiting in any way. The example method 140 details using a 2D floorplan 170 shown in Figure 4 to create a 3D BIM model file 118.
In step 142, data is acquired from the data sources 102. In step 144, the extraction algorithm 112 determines whether conversion of the data sources is necessary. For example, if the data source is a CAD file, conversion to bitmap format would be performed to provide a standardized format for the AI engine 110; however, other format standardizations are possible. If conversion is needed, then in step 146, the file is converted to a bitmap file. In step 148, the ML extraction algorithm 112 extracts relevant data from the bitmap file. In step 150, the ML extraction algorithm 112 masks the floorplan using an image segmentation technique.
Figure 5 shows the 2D floorplan from Figure 4 as it is being masked. In some embodiments, the floorplan 180 is masked using color coding, where certain colors represent certain structural elements. For instance, black may represent walls 182, blue may represent windows 184, red may represent doors 186, and green may represent rooms 188. When the AI engine 110 is initially used, masking is done manually to train it to use image segmentation to accurately mask floorplans. This manual masking may be done in image editing software such as Photoshop™. Each masked image 180 is kept in a separate image layer. Separate layers allow for ease of processing and allow masks to overlap each other. For example, all walls may be in one or more layers; however, only walls will be shown in these layers. A separate layer is used for openings such as windows and doors, and contains only windows and doors. Any icons for furniture or equipment are kept in their own separate layer. Any layer may overlap another layer. For example, the layer containing the walls may overlap the layer containing the windows and doors or the layer containing the floor.
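By way of illustration only, the following minimal Python sketch splits a color-coded mask image into per-element layers using the color scheme described above; the exact color values and the use of OpenCV are assumptions for the example.
```python
# Minimal sketch (assumed colour codes) of splitting a colour-coded mask image
# into one binary layer per structural element.
import numpy as np
import cv2  # OpenCV

COLOURS = {                      # BGR order, as OpenCV loads images
    "walls":   (0, 0, 0),        # black
    "windows": (255, 0, 0),      # blue
    "doors":   (0, 0, 255),      # red
    "rooms":   (0, 255, 0),      # green
}

def split_mask_layers(mask_path: str) -> dict:
    """Return one binary layer per structural element from a colour-coded mask."""
    image = cv2.imread(mask_path)                        # H x W x 3, BGR
    layers = {}
    for name, bgr in COLOURS.items():
        match = np.all(image == np.array(bgr), axis=-1)  # pixels of exactly this colour
        layers[name] = match.astype(np.uint8) * 255      # binary mask, 0 or 255
    return layers
```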
Turning back to Figure 3, in step 152, the contours of each mask are identified and defined. A contour is an array of 2D points that encloses an area to define a shape. In one embodiment, dilation techniques are used to generate the contours that form the mask. The ML extraction algorithm 112 interprets each pixel of the 2D floorplan 170 to determine its relationship to nearby pixels. For example, if a pixel is black, the ML extraction algorithm determines which nearby pixels are also black; based on their distance from each other, a certain threshold determines whether they share the same contour. This threshold is determined by trial and error by the user, but may also be determined by the AI engine through machine learning. In Figure 6, an exemplary portion 190 from the 2D floorplan is shown where the contours 192, 194, 196, 198 have been determined. To determine whether contours 192, 194, 196, 198 are related to each other, a depth-first search (DFS) is performed to identify the shapes that the contours depict, such as rooms. In this example, contours 192 and 194 are determined to be the same contour, and contours 196 and 198 are determined to be the same contour. It is also determined that contours 196 and 198 are placed within contours 192 and 194, thus defining an opening 199 which is enclosed by the contours 192, 194, 196, 198.
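By way of illustration only, the following minimal Python sketch applies dilation and contour detection to a binary wall layer in the manner described above; the dilation size is an assumed tunable parameter and OpenCV is used for convenience.
```python
# Minimal sketch of finding contours on a binary wall layer after dilation so
# that nearby wall pixels join into the same contour (OpenCV 4.x signature).
import numpy as np
import cv2

def find_wall_contours(wall_layer: np.ndarray, dilate_px: int = 3):
    """wall_layer: uint8 binary mask (255 = wall). Returns contours and hierarchy."""
    kernel = np.ones((dilate_px, dilate_px), np.uint8)
    dilated = cv2.dilate(wall_layer, kernel, iterations=1)
    # RETR_CCOMP returns a two-level hierarchy: outer contours and the holes
    # (openings) they enclose, which supports the room/opening reasoning above.
    contours, hierarchy = cv2.findContours(dilated, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
    return contours, hierarchy
```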
In Figure 7, an exemplary portion 200 from the 2D floorplan is shown, where the contours 202, 204 have been determined to enclose a space 206. Within the space 206 formed by the contours 202, 204, the largest circle 208 that touches the inside contour 204 is drawn. If the radius of this circle 208 is below a certain threshold, anything within the space 206 may be discarded, meaning that it is ignored for the purposes of masking. This threshold is determined by trial and error by the user, but may also be determined by the AI engine through machine learning.
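By way of illustration only, the largest-inscribed-circle test described above may be approximated with a distance transform, as in the following minimal Python sketch; the radius threshold is an assumed tunable parameter.
```python
# Minimal sketch of the "largest inscribed circle" test using a distance
# transform: the maximum distance from an interior pixel to the boundary is
# the radius of the largest circle that fits inside the enclosed space.
import numpy as np
import cv2

def keep_enclosed_space(space_mask: np.ndarray, min_radius_px: float = 15.0) -> bool:
    """space_mask: uint8 binary image (255 inside the enclosed space, 0 elsewhere).
    Returns False if the largest inscribed circle is below the threshold, meaning
    the space is ignored for the purposes of masking."""
    dist = cv2.distanceTransform(space_mask, cv2.DIST_L2, 5)
    largest_radius = float(dist.max())   # radius of the largest inscribed circle
    return largest_radius >= min_radius_px
```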
Returning to Figure 3, in step 154, the ML generation algorithm 114 uses the identified contours to create a single-layer mesh. Figure 8 shows a generated mesh 210 for the 2D floorplan. The contours from Figures 6 and 7 are used to create the single-layer mesh 210, which comprises a set of triangles connected by their common edges or corners.
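By way of illustration only, the following minimal Python sketch builds a triangle mesh from a single contour by Delaunay-triangulating its points and keeping the triangles that fall inside the contour; a production system might instead use a constrained triangulation, so this is only an approximation of the step described above.
```python
# Minimal sketch: triangulate contour points and keep triangles whose centroid
# lies inside the contour, yielding a vertices + triangle-index mesh.
import numpy as np
import cv2
from scipy.spatial import Delaunay

def mesh_from_contour(contour: np.ndarray):
    """contour: (N, 1, 2) int array as returned by cv2.findContours."""
    points = contour.reshape(-1, 2).astype(np.float64)
    tri = Delaunay(points)
    triangles = []
    for simplex in tri.simplices:                       # indices of triangle vertices
        centroid = points[simplex].mean(axis=0)
        inside = cv2.pointPolygonTest(
            contour, (float(centroid[0]), float(centroid[1])), False) >= 0
        if inside:
            triangles.append(simplex)
    return points, np.array(triangles)                  # vertices + triangle indices
```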
Still referring to Figure 3, in step 156, the generated mesh is optimized for topology and vertex placement. Techniques such as those demonstrated in US20180330480 may be used to optimize the generated mesh, as may a mesh optimization algorithm such as edge collapse or level-of-detail optimization. The generated mesh 210 is rescaled based on the scale of the initial extraction from the specification documents 106. If no scale indication is found or detected, an estimated scale is based on door size, which can be found in building code documents.
In step 158, the building codes and other user inputs are retrieved.
Building codes may be retrieved from a database containing current building codes, either manually, via hardcoding in the software, or by extraction via natural language processing (NLP), which is described in more detail below. In step 160, the bitmaps of the optimized mesh are converted to vectors. In step 162, the ML generation algorithm 114 extrudes a 3D model from the optimized mesh 210, applying the retrieved building codes and other user inputs to the model. Figure 9 shows an exemplary extruded 3D model 220. The extrusion is based on the wall height input in the design specification documents 106, which may specify different heights for different walls in a height map. If no height or height map is given in the design specification documents 106, then the ML generation algorithm applies standard wall heights from the retrieved building codes.
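By way of illustration only, the following minimal Python sketch extrudes a 2D wall contour to a given height, turning each contour edge into two triangles; the default height stands in for a value taken from the retrieved building codes.
```python
# Minimal sketch of extruding a closed 2D contour into 3D wall geometry:
# each contour edge becomes a vertical quad made of two triangles.
import numpy as np

def extrude_contour(contour_2d: np.ndarray, height: float = 2.4):
    """contour_2d: (N, 2) array of x,y points in metres; returns (vertices, faces)."""
    n = len(contour_2d)
    bottom = np.hstack([contour_2d, np.zeros((n, 1))])          # z = 0
    top = np.hstack([contour_2d, np.full((n, 1), height)])      # z = wall height
    vertices = np.vstack([bottom, top])                         # 2N x 3
    faces = []
    for i in range(n):
        j = (i + 1) % n                                         # wrap around the loop
        faces.append([i, j, n + j])                             # lower triangle
        faces.append([i, n + j, n + i])                         # upper triangle
    return vertices, np.array(faces)
```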
After extrusion, the ML generation algorithm 114 may apply texture mapping. When applying texture mapping, the AI engine 110 unwraps all texture coordinates (or UV coordinates). Typically, when a model is created as a polygon mesh, texture coordinates may be generated for each vertex in the mesh. Materials and textures may then be applied to walls, windows, and doors based on user choices, the design specification documents 106, and other settings. Applied materials and textures may be manually checked to ensure they are applied correctly.
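By way of illustration only, the following minimal Python sketch generates per-vertex texture (UV) coordinates for a single extruded wall by planar projection; real UV unwrapping of a full model is considerably more involved.
```python
# Minimal sketch of planar UV generation for one straight wall: distance along
# the wall maps to U, height above the floor maps to V.
import numpy as np

def planar_uvs(vertices: np.ndarray, wall_height: float) -> np.ndarray:
    """vertices: (M, 3) wall vertices; returns (M, 2) UV coordinates in [0, 1]."""
    along = np.linalg.norm(vertices[:, :2] - vertices[0, :2], axis=1)  # distance along wall
    u = along / max(float(along.max()), 1e-9)
    v = vertices[:, 2] / wall_height
    return np.stack([u, v], axis=1)
```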
In an embodiment, the system may also comprise an ML image recognition algorithm to automatically identify any known icons within the 2D floorplan 170. These known icons may include, but are not limited to, typical icons used for furniture, equipment, and other props in floorplans. Each icon that is identified and located is pre-processed using a template image; pre-processing uses image detection to look for images similar to known images, i.e., the template image. Once the icons are pre-processed, props may be placed automatically in the model at the locations in which the icons were identified, as shown in Figure 10. Figure 10 is the 3D model 230 of the floorplan populated with props such as chairs 232, a meeting room table 234, and work stations 236, which are identified for the purposes of illustration only. It is understood that the props shown in the diagram may be substituted for other props. The system 100 may also have functionality for manually adding any props that were not placed during the automated step, or for making any changes to props that a user decides on.
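By way of illustration only, the following minimal Python sketch locates a known icon on the floorplan bitmap using template matching; the match threshold and grayscale inputs are assumptions for the example.
```python
# Minimal sketch of finding known furniture/equipment icons on the floorplan
# bitmap with normalized cross-correlation template matching.
import numpy as np
import cv2

def find_icon(floorplan_gray: np.ndarray, template_gray: np.ndarray, threshold: float = 0.8):
    """Return (x, y) top-left pixel locations where the template icon matches."""
    result = cv2.matchTemplate(floorplan_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    ys, xs = np.where(result >= threshold)
    return list(zip(xs.tolist(), ys.tolist()))   # prop placement positions in the model
```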
In some embodiments, the ML image recognition algorithm may be able to link text with an icon that appears near it in a floorplan by evaluating whether the text is a good fit for the icon. The ML image recognition algorithm may be trained for this type of recognition and evaluation. For example, a known symbol for a fire alarm may appear on the floorplan with "31 meters" and "FA-1234" nearby. The ML image recognition algorithm may be trained manually to recognize that FA-1234 is a fire alarm identification, whereas "31 meters" has nothing to do with fire alarms.
Turning back to Figure 3, in step 164, the BIM file 118 is created from the 3D model. As in Figure 2, the BIM file 118 may be used to automate the creation of appropriate engineering documents, which may be done by other program modules or by third-party plug-ins. Engineering documents may include, but are not limited to, bills of materials, specification sheets, and calculations of specific areas or dimensions. The method 140 of Figure 3 is generally faster and more accurate than traditional methods of creating BIM files.
Figure 11 shows an alternative embodiment of the invention, generally referenced by the number 300. In this embodiment, rather than training the AI engine with enough data to accurately identify shapes from documents, the method only requires the extraction of simple geometric data as input.
In step 302, a 3D point cloud is input into the neural network. A point cloud is a collection of data points defined by a given coordinate system. In a 3D coordinate system, for example, a point cloud may define the shape of some real or created physical system. Point clouds are used to create 3D meshes and other models used in 3D modeling for various fields, including medical imaging, architecture, 3D printing, manufacturing, 3D gaming, and various virtual reality (VR) applications. The neural network acts similarly to the ML extraction algorithm 112 of Figure 1, but is specially trained to understand 3D point cloud data.
In step 304, the neural network determines whether the point cloud data requires conversion. If so, then in step 306, the 3D points are converted into 2D images or voxels.
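By way of illustration only, the following minimal Python sketch converts a 3D point cloud into an occupancy voxel grid, one possible form of the conversion performed in step 306; the voxel size is an assumed parameter.
```python
# Minimal sketch of voxelizing a 3D point cloud into a boolean occupancy grid.
import numpy as np

def voxelize(points: np.ndarray, voxel_size: float = 0.05) -> np.ndarray:
    """points: (N, 3) array of x,y,z coordinates; returns a boolean occupancy grid."""
    mins = points.min(axis=0)
    indices = np.floor((points - mins) / voxel_size).astype(int)  # cell index per point
    dims = indices.max(axis=0) + 1
    grid = np.zeros(dims, dtype=bool)
    grid[indices[:, 0], indices[:, 1], indices[:, 2]] = True
    return grid
```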
In step 308, the neural network labels the data. This is similar to the process of Figure 5, where a 2D floorplan is masked.
In step 310, the labelled data is converted into 3D vector points by the neural network. In step 312, building codes and other user inputs are retrieved. Building codes may be retrieved from a database containing current building codes. In step 314, a 3D model is extruded from the 3D vector points by applying building codes and other user inputs. In step 316, a BIM file is created from the extruded 3D model. As in Figure 2, the BIM file 118 may be used to automate the creation of appropriate engineering documents. Engineering documents may include, but are not limited to, bills of materials, specification sheets, calculations of specific areas or dimensions, etc.
Figure 12 shows another alternative embodiment of the invention, generally referenced by the number 400. In this embodiment, rather than inputting data from the typical data sources, raw design code data is input in step 402. Raw code data may include text documents that are manually entered or scanned in using optical character recognition (OCR) techniques. In step 404, it is determined whether the raw code data requires conversion. If so, then in step 406, the data is digitized or OCR is used to convert the data to a suitable format such as JSON or XML. In step 408, natural language algorithms or natural language processing (NLP) extracts information in a manner similar to the ML extraction algorithm 112 of Figure 1. In step 410, parameters for building 3D models are input. These parameters may be input manually by a user or extracted by an extraction algorithm such as the ML extraction algorithm 112 described above. In step 412, other design data is retrieved from the database for any parameters missed in other steps. In step 414, a 3D model is extruded from the optimized mesh (created similarly to that in Figure 3) by applying building codes and other user inputs. In step 416, a BIM file is created from the extruded 3D model. As in Figure 2, the BIM file 118 may be used to automate the creation of appropriate engineering documents. Engineering documents may include, but are not limited to, bills of materials, specification sheets, and calculations of specific areas or dimensions.
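By way of illustration only, the following minimal Python sketch pulls a numeric parameter, such as a minimum ceiling height, out of digitized building-code text with a regular expression; a full NLP pipeline as described for step 408 would replace this simple assumed pattern.
```python
# Minimal sketch (assumed phrasing) of extracting a numeric building-code
# parameter from digitized text with a regular expression.
import re

CODE_TEXT = "The minimum ceiling height in habitable rooms shall be 2.3 m."

def extract_min_ceiling_height(text: str):
    match = re.search(r"minimum ceiling height[^\d]*(\d+(?:\.\d+)?)\s*m", text, re.IGNORECASE)
    return float(match.group(1)) if match else None

print(extract_min_ceiling_height(CODE_TEXT))   # -> 2.3
```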
Unless otherwise explained, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. Although any methods and materials similar or equivalent to those described herein can be used in the practice or testing of the present invention, the typical materials and methods are described herein. In describing and claiming the present invention, the following terminology will be used.
It is also to be understood that the terminology used herein is for the purpose of describing particular aspects only and is not intended to be limiting. Patent applications, patents, and publications are cited herein to assist in understanding the aspects described. All such references cited herein are incorporated herein by reference in their entirety and for all purposes to the same extent as if each individual publication or patent or patent application was specifically and individually indicated to be incorporated by reference in its entirety for all purposes. To the extent publications and patents or patent applications incorporated by reference contradict the disclosure contained in the specification, the specification is intended to supersede and/or take precedence over any such contradictory material.
In understanding the scope of the present application, the articles “a”, “an”, “the”, and “said” are intended to mean that there are one or more of the elements. Additionally, the term “comprising” and its derivatives, as used herein, are intended to be open ended terms that specify the presence of the stated features, elements, components, groups, integers, and/or steps, but do not exclude the presence of other unstated features, elements, components, groups, integers and/or steps. The foregoing also applies to words having similar meanings such as the terms, “including”, “having” and their derivatives.
It will be understood that any aspects described as “comprising” certain components may also “consist of” or “consist essentially of” those components, wherein “consisting of” has a closed-ended or restrictive meaning and “consisting essentially of” means including the components specified but excluding other components except for materials present as impurities, unavoidable materials present as a result of processes used to provide the components, and components added for a purpose other than achieving the technical effect of the invention.
It will be understood that any component defined herein as being included may be explicitly excluded from the claimed invention by way of proviso or negative limitation.
In addition, all ranges given herein include the end of the ranges and also any intermediate range points, whether explicitly stated or not.
Terms of degree such as “substantially”, “about” and “approximately” as used herein mean a reasonable amount of deviation of the modified term such that the end result is not significantly changed. These terms of degree should be construed as including a deviation of at least ±5% of the modified term if this deviation would not negate the meaning of the word it modifies.
The abbreviation, “e.g.” is derived from the Latin exempli gratia, and is used herein to indicate a non-limiting example. Thus, the abbreviation “e.g.” is synonymous with the term “for example.” The word “or” is intended to include “and” unless the context clearly indicates otherwise.

Claims

WHAT IS CLAIMED IS:
1. A system for automating the production of 3-dimensional building information modeling files comprising: a plurality of data sources; a computer processor for executing an artificial intelligence engine stored in a computer readable memory, the artificial intelligence engine comprising: a machine learning extraction module for extracting relevant data from the plurality of data sources, and a machine learning generation module for extruding a 3-dimensional model from the plurality of data sources and generating a building information modeling file from the extracted relevant data; and at least one database for storing the building information modeling file.
2. The system of claim 1, wherein the building information modeling file is utilized in creating engineering documents.
3. The system of claim 2, wherein the engineering documents may be any one of bills of materials, specification sheets, calculations of specific areas and dimensions.
4. The system of claim 1, wherein the data sources may be any one of past project data, design specification documents, and user input.
5. The system of claim 1, wherein the machine learning extraction module converts the relevant data to a bitmap format if conversion of the relevant data is required.
6. The system of claim 1, wherein the machine learning extraction module extracts data using computer vision techniques.
7. The system of claim 1, wherein the machine learning extraction module extracts data using natural language processing.
8. The system of claim 1, wherein the machine learning extraction module masks the relevant data before extruding the 3-dimensional model to identify shapes and contours.
9. The system of claim 8, wherein the identified shapes and contours are used to create a mesh.
10. The system of claim 9, wherein the machine learning generation module further comprises a mesh optimizing module for optimizing the mesh.
11. The system of claim 10, wherein the 3-dimensional model is extruded from the optimized mesh.
12. A method for automating the production of 3-dimensional building information modeling files utilizing an artificial intelligence engine, comprising steps of: acquiring data from a plurality of data sources using a machine learning extraction module in the artificial intelligence engine; extruding a 3-dimensional model from the acquired data using a machine learning generation module in the artificial intelligence engine; and generating, using the artificial intelligence engine, a building information modeling file from the extracted relevant data.
13. The method of claim 12 further comprising creating engineering documents utilizing the building information modeling file.
14. The method of claim 13, wherein the engineering documents may be any one of bills of materials, specification sheets, calculations of specific areas and dimensions.
15. The method of claim 12, wherein the data sources may be any one of past project data, design specification documents, and user input.
16. The method of claim 12, wherein after acquiring the data, the machine learning extraction module converts the relevant data to a bitmap format if conversion of the relevant data is required.
17. The method of claim 12, wherein the machine learning extraction module extracts data using computer vision techniques.
18. The method of claim 12, wherein the machine learning extraction module extracts data using natural language processing.
19. The method of claim 12, wherein before the 3-dimensional model is extruded, the machine learning extraction module masks the relevant data to identify shapes and contours.
20. The method of claim 19, wherein the identified shapes and contours are used to create a mesh.
21. The method of claim 20, wherein the machine learning generation module further comprises a mesh optimizing module for optimizing the mesh.
22. The method of claim 21, wherein the 3-dimensional model is extruded from the optimized mesh.
23. The method of claim 12, wherein the generated building information model file is stored to at least one database.
PCT/CA2020/051337 2019-10-07 2020-10-06 System and method for generating 3d models from specification documents WO2021068061A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201962911753P 2019-10-07 2019-10-07
US62/911,753 2019-10-07

Publications (1)

Publication Number Publication Date
WO2021068061A1 true WO2021068061A1 (en) 2021-04-15

Family

ID=75436738

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CA2020/051337 WO2021068061A1 (en) 2019-10-07 2020-10-06 System and method for generating 3d models from specification documents

Country Status (2)

Country Link
CN (1) CN112700529A (en)
WO (1) WO2021068061A1 (en)



Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10297074B2 (en) * 2017-07-18 2019-05-21 Fuscoe Engineering, Inc. Three-dimensional modeling from optical capture
CN109711099B (en) * 2019-01-23 2022-10-04 河南省交通规划设计研究院股份有限公司 BIM automatic modeling system based on image recognition machine learning
CN110134724A (en) * 2019-05-15 2019-08-16 清华大学 A kind of the data intelligence extraction and display system and method for Building Information Model

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180330480A1 (en) * 2017-05-10 2018-11-15 Babylon VR Inc. System and methods for generating an optimized 3d model

Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
DODGE, S. ET AL.: "Parsing floor plan images", 2017 FIFTEENTH IAPR INTERNATIONAL CONFERENCE ON MACHINE VISION APPLICATIONS (MVA), 8 May 2017 (2017-05-08), Nagoya, Japan, pages 358 - 361, XP033126601, Retrieved from the Internet <URL:https://www.researchgate.net/publication/318582079> [retrieved on 20201218], DOI: 10.23919/MVA.2017.7986875 *
JANSSEN PATRICK, CHEN KIAN WEE, MOHANTY AKSHATA: "Automated Generation of BIM Models", BIM | CONCEPTS, vol. 2, 2016, pages 583 - 590, XP055817168, Retrieved from the Internet <URL:http://papers.cumincad.org/data/works/att/ecaade2016_239.pdf> [retrieved on 20201218] *
JOONAS HELMINEN: "Automated Generation of Steel Connections of BIM by Machine Learning", MASTER OF SCIENCE THESIS, 25 June 2019 (2019-06-25), Finland, pages 1 - 59, XP055817149, Retrieved from the Internet <URL:https://trepo.tuni.fi/bitstream/handle/10024/115724/Helminen.pdf?sequence=2&isAllowed=y> [retrieved on 20201218] *
LIM JOIE, JANSSEN PATRICK, STOUFFS RUDI: "Automated Generation of BIM Models From 2D CAD Drawings", PROCEEDINGS OF THE 23RD INTERNATIONAL CONFERENCE OF THE ASSOCIATION FOR COMPUTER-AIDED ARCHITECTURAL DESIGN RESEARCH IN ASIA (CAADRIA, vol. 2, May 2018 (2018-05-01), Beijing, China, pages 61 - 70, XP055817166, Retrieved from the Internet <URL:https://www.researchgate.net/publication/325300333> [retrieved on 20201218] *
MATTHEW CHANDLER: "3D Interior Model Extrusion from 2D Floor Plans", MASTER OF COMPUTER SCIENCE PROJECT, 18 April 2012 (2012-04-18), USA, pages 1 - 34, XP055817151, Retrieved from the Internet <URL:https://www.cs.uaf.edu/media/filer_public/fd/4e/fd4eb9f0-7b66-4c4b-bb87-309580cf52d7/ms_cs_matt_chandler.pdf> [retrieved on 20201218] *
MICHAEL LAWRENCE, RACHEL POTTINGER, SHERYL STAUB-FRENCH, MADHAV PRASAD NEPAL: "Creating flexible mappings between Building Information Models and cost information", AUTOMATION IN CONSTRUCTION, vol. 45, September 2014 (2014-09-01), pages 107 - 118, XP055817122 *
SACKS RAFAEL, MA LING, YOSEF RAZ, BORRMANN ANDRE, DAUM SIMON, KATTEL URI: "Semantic Enrichment for Building Information Modeling: Procedure for Compiling Inference Rules and Operators for Complex Geometry", vol. 31, no. 6, 28 August 2017 (2017-08-28), pages 1 - 12, XP055817139, Retrieved from the Internet <URL:https://ascelibrary.org/doi/pdf/10.1061/(ASCE)CP.1943-5487.0000705> [retrieved on 20201218] *
SANDELIN, FREDRIK: "Semantic and Instance Segmentation of Room Features in Floor Plans using Mask R-CNN", UPTEC IT 19011, EXAMENSARBETE 30 HP, 30 August 2019 (2019-08-30), Sweden, pages ii-iv, 1 - 48, XP055817124 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023275833A1 (en) * 2021-07-02 2023-01-05 Ganesh Balasubramanian Automatic conversion of 2d schematics to 3d models
WO2023279186A1 (en) * 2021-07-06 2023-01-12 Orbiseed Technology Inc. Methods and systems for extracting text and symbols from documents
US20230109144A1 (en) * 2021-09-03 2023-04-06 Tata Consultancy Services Limited Systems and methods for extracting, digitizing, and using engineering drawing data
US11899704B2 (en) * 2021-09-03 2024-02-13 Tata Consultancy Services Limited Systems and methods for extracting, digitizing, and using engineering drawing data
IT202200002258A1 (en) 2022-02-08 2023-08-08 Stefano Revel METHOD OF DETECTION OF PHYSICAL BODY MEASUREMENTS USING PHOTOGRAMMETRY
CN117236341A (en) * 2023-09-21 2023-12-15 东方经纬项目管理有限公司 Whole process engineering consultation integrated system

Also Published As

Publication number Publication date
CN112700529A (en) 2021-04-23

Similar Documents

Publication Publication Date Title
WO2021068061A1 (en) System and method for generating 3d models from specification documents
US11636234B2 (en) Generating 3D models representing buildings
US11043026B1 (en) Systems and methods for processing 2D/3D data for structures of interest in a scene and wireframes generated therefrom
Bassier et al. Unsupervised reconstruction of Building Information Modeling wall objects from point cloud data
US20190243928A1 (en) Semantic segmentation of 2d floor plans with a pixel-wise classifier
CN105006016B (en) A kind of component-level 3 D model construction method of Bayesian network constraint
CN108399649A (en) A kind of single picture three-dimensional facial reconstruction method based on cascade Recurrent networks
JP7294788B2 (en) Classification of 2D images according to the type of 3D placement
Pantoja-Rosero et al. Generating LOD3 building models from structure-from-motion and semantic segmentation
Ishikawa et al. Semantic segmentation of 3D point cloud to virtually manipulate real living space
CN110363804B (en) Flower bas-relief generating method based on deformation model
Xu et al. Three-dimensional object detection with deep neural networks for automatic as-built reconstruction
Demir et al. Guided proceduralization: Optimizing geometry processing and grammar extraction for architectural models
CN114399784A (en) Automatic identification method and device based on CAD drawing
Parente et al. Integration of convolutional and adversarial networks into building design: A review
Gruen et al. Semantically enriched high resolution LoD 3 building model generation
Gavrilov et al. A method for aircraft labeling in aerial and satellite images based on continuous morphological models
CN110706347A (en) Implementation method for creating 3D building model through wire frame diagram of building
Divya Udayan J et al. An image-based approach to the reconstruction of ancient architectures by extracting and arranging 3D spatial components
Mahmoud et al. Automated BIM generation for large-scale indoor complex environments based on deep learning
Bassier et al. BIM reconstruction: Automated procedural modeling from point cloud data
CN112052489B (en) Method and system for generating house type graph
Kratt et al. Sketching in gestalt space: Interactive shape abstraction through perceptual reasoning
Kelly et al. VisionGPT-3D: A Generalized Multimodal Agent for Enhanced 3D Vision Understanding
Ochmann Automatic reconstruction of parametric, volumetric building models from 3D point clouds

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20874399

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20874399

Country of ref document: EP

Kind code of ref document: A1