WO2023283377A1 - Methods, storage media, and systems for augmenting data or models - Google Patents
- Publication number: WO2023283377A1 (application PCT/US2022/036416)
- Authority: WO — WIPO (PCT)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/04—Architectural design, interior design
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/20—Indexing scheme for editing of 3D models
- G06T2219/2004—Aligning objects, relative positioning of parts
Definitions
- the present disclosure relates to methods, storage media, and systems for augmenting two-dimensional and/or three-dimensional data or models.
- Data, such as two-dimensional (2D) data (e.g., visual data), three-dimensional (3D) data (e.g., depth data), or both, can be captured from an environment.
- Models, such as 2D models (e.g., digital representations in 2D space), 3D models (e.g., digital representations in 3D space), or both, can be generated from such data.
- Different data capture techniques and reconstruction techniques can result in varying degrees of inaccuracies in the data
- different modeling techniques can result in varying degrees of inaccuracies in the models.
- inaccuracies in the data can propagate and result in inaccuracies in the models. While data capture or scanning techniques, reconstruction techniques, and modeling techniques continue to improve, these various techniques can result in inaccuracies which limit the scope of any one data set or model.
- Described herein are various methods, storage media, and systems for augmenting data, such as two-dimensional (2D) data (e.g., visual data), three-dimensional (3D) data (e.g., depth data), or both, and models, such as 2D models (e.g., digital representations in 2D space), 3D models (e.g., digital representations in 3D space), or both.
- Augmenting one set of data or one model with another set of data or another model can address issues related to different capture techniques, reconstruction techniques, and modeling techniques. Augmenting one set of data or one model with another set of data or another model can be used to revise, refine, or complete, the data or the models. In some embodiments, one set of data or one model can be leveraged to improve another set of data or another model.
- One aspect of the present disclosure relates to a method for augmenting 3D models.
- the method may include receiving a first plurality of images.
- the method may include generating a first 3D model based on the first plurality of images.
- the method may include receiving a second plurality of images.
- the method may include generating a second 3D model based on the second plurality of images.
- the method may include augmenting the first 3D model with the second 3D model.
- Another aspect of the present disclosure relates to a non-transient computer-readable storage medium having instructions embodied thereon, the instructions being executable by one or more processors to perform a method for augmenting 3D models.
- the method may include receiving a first plurality of images.
- the method may include generating a first 3D model based on the first plurality of images.
- the method may include receiving a second plurality of images.
- the method may include generating a second 3D model based on the second plurality of images.
- the method may include augmenting the first 3D model with the second 3D model.
- Yet another aspect of the present disclosure relates to a system configured for augmenting 3D models. The system may include one or more hardware processors configured by machine-readable instructions.
- the processor(s) may be configured to receive a first plurality of images.
- the processor(s) may be configured to generate a first 3D model based on the first plurality of images.
- the processor(s) may be configured to receive a second plurality of images.
- the processor(s) may be configured to generate a second 3D model based on the second plurality of images.
- the processor(s) may be configured to augment the first 3D model with the second 3D model.
- FIG. 1 illustrates 3D data of an interior environment, according to some embodiments.
- FIGS. 2A-2D illustrate 3D data of an interior environment, according to some embodiments.
- FIG. 3A illustrates 3D data of an interior environment, according to some embodiments.
- FIG. 3B illustrates visual data of a portion of the interior environment of FIG. 3A, according to some embodiments.
- FIG. 4A illustrates a top-down view of a capture (e.g., scan) subprocess of a 3D reconstruction process of an exterior, according to some embodiments.
- FIGS. 4B-4E illustrate images captured by a capture device at poses illustrated in FIG. 4A, according to some embodiments.
- FIG. 4F illustrates a front-left view of a model, according to some embodiments.
- FIG. 4G illustrates a back-right view of a model, according to some embodiments.
- FIG. 5A illustrates interior 3D data augmented with an exterior 3D model, according to some embodiments.
- FIG. 5B illustrates a magnified view of a portion of FIG. 5A, according to some embodiments.
- FIG. 6A illustrates a top-down view of a floorplan representation generated based on 3D data, according to some embodiments.
- FIG. 6B illustrates a top-down view of a floorplan representation and augmented 3D data, according to some embodiments.
- FIG. 6C illustrates a perspective view of a floorplan representation and augmented 3D data, according to some embodiments.
- FIG. 7A illustrates a floorplan with a capture path, according to some embodiments.
- FIG. 7B illustrates a floorplan with a capture path, according to some embodiments.
- FIG. 8 illustrates a block diagram of a computer system that may be used to implement the techniques described herein, according to some embodiments.
- FIG. 9 illustrates a system configured for augmenting 3D models, according to some embodiments.
- FIG. 10 illustrates a method for augmenting 3D models, according to some embodiments.
- a 3D reconstruction process can use 3D capturing or scanning techniques implemented on a 3D scanner to capture or receive 3D data (sometimes referred to as “images” generally, including depth data) of an environment that is used to generate a 2D or 3D model that can be displayed.
- the 2D or the 3D model can be a polygon-based model (e.g., a mesh model), a primitive-based model, and the like.
- the 3D reconstruction process can use 2D capturing or scanning techniques implemented on a 2D scanner to capture or receive 2D data (sometimes referred to as “images” generally, including visual data) of an environment that is used to generate a 2D or 3D model that can be displayed.
- the 2D or 3D model can be a polygon-based model (e.g., a mesh model), a primitive-based model, and the like.
- the 3D reconstruction process can include one or more subprocesses such as, for example, a capture subprocess, a reconstruction subprocess, a display subprocess, and the like.
- Examples of 3D capturing or scanning techniques include time-of-flight, triangulation, structured light, modulated light, stereoscopic, photometric, photogrammetry, and the like.
- Examples of 3D data include depth data such as, for example, 3D point clouds, 3D line clouds, 3D meshes, 3D points, and the like.
- Examples of 3D models include mesh models (e.g., polygon models), surface models, wire-frame models, computer-aided-design (CAD) models, and the like.
- Examples of 2D capturing or scanning techniques include global shutter capture, rolling shutter capture, panoramic capture, wide-angle capture (e.g., 180 degree camera capture, 360 degree camera capture, etc.), image capture, video capture, and the like.
- Examples of 2D data include visual data such as, for example, image data, video data, and the like.
- the 3D reconstruction process can capture the 3D data and the 2D data synchronously or asynchronously.
- the 3D reconstruction process can capture data (e.g., the 3D data or the 2D data) at a fixed interval or as a function of movement of a scanner (e.g., the 3D scanner or the 2D scanner).
- the 3D reconstruction process can capture data based on translation thresholds, rotation thresholds, or both. For example, the 3D reconstruction process can capture data if the scanner translates more than a translation threshold, rotates more than a rotation threshold, or both.
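The threshold-based capture trigger described above can be sketched as follows. This is a minimal illustration, not the disclosure's implementation: the threshold values, the pose representation (position plus yaw), and the function names are all assumptions.

```python
import math

# Assumed threshold values for illustration only.
TRANSLATION_THRESHOLD_M = 0.25   # capture after moving 25 cm
ROTATION_THRESHOLD_DEG = 15.0    # or after rotating 15 degrees

def should_capture(last_pose, current_pose):
    """Decide whether to capture a new frame.

    Each pose is a ((x, y, z), yaw_degrees) tuple (an assumed representation).
    Returns True if the scanner translated more than the translation threshold
    or rotated more than the rotation threshold since the last capture.
    """
    (lx, ly, lz), lyaw = last_pose
    (cx, cy, cz), cyaw = current_pose
    translation = math.dist((lx, ly, lz), (cx, cy, cz))
    # Wrap the yaw difference into [-180, 180] before comparing.
    rotation = abs((cyaw - lyaw + 180.0) % 360.0 - 180.0)
    return translation > TRANSLATION_THRESHOLD_M or rotation > ROTATION_THRESHOLD_DEG
```

A capture loop would call `should_capture` on every pose update and record a frame (and reset the reference pose) whenever it returns True.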
- the 2D data, the 3D data, or both can be captured by a smartphone, a tablet computer, an augmented reality headset, a virtual reality headset, a drone, an aerial platform, and the like, or a combination thereof.
- the 2D data, the 3D data, or both can include a building object, for example an interior of the building object, an exterior of the building object, or both.
- the 2D data can be used to augment the 3D data.
- the 3D data can be textured based on the 2D data.
- a model (e.g., the 3D model or the 2D model) can be a floorplan representation of the environment.
- the floorplan can be an envelope representation including an outline of the environment, or a detailed representation including an outline of the environment and elements such as portals (e.g., doors, windows, space-to-space openings, and the like), interior walls, fixed furniture / appliances, and the like.
- the floorplan representation can include measurements, labels for the different spaces and elements within the environment, and the like. Examples of labels for the different spaces include entryway, reception, foyer, living room, family room, kitchen, bedroom, bathroom, closet, hallway, corridor, staircase, balcony, terrace, and the like. Examples of labels for the different elements include refrigerator, washer/dryer, dishwasher, range, microwave, range hood, wall oven, cooktop, toilet, sink, bath, exhaust fans, countertops, cabinets, and the like.
- Limitations of 3D capturing or scanning techniques can cause lines that are straight in the environment to appear distorted in the 3D data.
- the likelihood of distortion artifacts, such as wavy, broken, or disjointed geometry, in the 3D data increases which can lead to an inaccurate 3D model.
- the presence and magnitude of the distortion artifacts can depend on the 3D capturing or scanning techniques implemented on the 3D scanner.
- FIG. 1 illustrates 3D data of an interior environment, according to some embodiments.
- Window frame artifact 1002 and fridge artifact 1004 are examples of wavy, broken, or disjointed geometry due to sensor drift, false positives in feature matching, or noisy scene information.
- FIGS. 2A-2D illustrate 3D data of an interior environment, according to some embodiments.
- Wall artifact 2002 and wall artifact 2014 are examples of wavy, broken, or disjointed geometry due to sensor drift, false positives in feature matching, or noisy scene information.
- the likelihood of wavy, broken, or disjointed geometry in the 3D data can be mitigated by decreasing the distance between the 3D scanner and a surface in the environment. Decreasing the distance between the 3D scanner and a surface in the environment may be difficult in certain circumstances. For example, in an environment with a high vaulted ceiling, it may not be possible to decrease the distance between the 3D scanner and the ceiling as the 3D scanner may not be able to get close to the ceiling.
- the environment can include potentially problematic surfaces, such as reflective surfaces, dark surfaces, and clear or transparent surfaces, which can lead to artifacts, such as duplicative elements, missing data referred to as holes, or additional data, in the 3D data.
- reflective surfaces include mirrors, and the like.
- dark surfaces include television screens, dark tabletops or countertops, and the like.
- clear surfaces include glass, clear plastics, glass tabletops or countertops, and the like.
- first mirror 2008 and second mirror 2012 are examples of reflective surfaces that lead to duplicative elements in the 3D data referred to as first mirror artifact 2006 and second mirror artifact 2010, respectively.
- hole 1006 is an example of an area where there is no 3D data from capturing or scanning.
- hole 2004 is an example of an area where there is no 3D data from capturing or scanning.
- Known hole filling techniques may work well for holes that are mostly flat but may not work as well for holes that have an irregular shape or curvature. Regardless of situations in which they may or may not work well, known hole filling techniques can be resource intensive or computationally expensive.
- FIG. 3A illustrates 3D data of an interior environment, according to some embodiments.
- FIG. 3B illustrates visual data of a portion of the interior environment of FIG. 3A, according to some embodiments.
- Visual data in FIG. 3B indicates that 3004 should be depicted as a solid wall; however, the reconstructed 3D data in FIG. 3A indicates that 3002, which is the same portion as 3004 of FIG. 3B, is a void.
- 3D reconstruction of the interior environment based on the 3D data illustrated in FIG. 3A would result in a model including a void, whereas 3D reconstruction based on the visual data in FIG. 3B would result in a wall. This is an example of different inputs producing different outputs.
- FIG. 4A illustrates a top-down view of a capture (e.g., scan) process or phase of a 3D reconstruction process of an exterior, according to some embodiments.
- Structure 4000 is captured by a capture device at poses 4002-4008.
- FIGS. 4B-4E illustrate images captured by the capture device at poses 4002-4008, respectively. As illustrated in FIG. 4A, the capture device captures structure 4000 from the left and the front.
- FIG. 4F illustrates a front-left view of model 4010 and FIG. 4G illustrates a back-right view of model 4010.
- Modeled portions 4012 are portions of 3D model 4010 that are modeled based on the images captured by the capture device from the left and the front of structure 4000.
- Unmodeled portions 4014 are portions of 3D model 4010 that are unmodeled as there are no images captured by the capture device from the right and the back of structure 4000. In some embodiments, for example as illustrated in FIG. 4G, unmodeled portions 4014 are predicted geometries of surfaces.
- 3D capturing or scanning techniques can be prone to errors such as tracking error and drift.
- Tracking error can manifest when a scanner (e.g., 3D scanner or 2D scanner) implementing capturing or scanning techniques (e.g., 3D scanning techniques or 2D scanning techniques) loses track of its location in the environment.
- the scanner can lose track of its location in an environment that lacks features, such as in a hallway. All sensors produce measurement errors.
- the measurement errors can be amplified in capturing or scanning techniques that rely on previous sensor values to determine current sensor values.
- Drift is the deviation of sensor values over time due to accumulated measurement errors.
- Errors such as tracking error and drift can be minimized by capturing or scanning the environment one space at a time and combining the scans of each space into an aggregate scan that represents the environment.
- One space at a time capturing or scanning may minimize errors such as tracking error and drift, but may not maintain the relationship between the spaces and thus lead to an inaccurate aggregate scan.
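As a rough illustration of how drift arises from accumulated measurement error, the sketch below simulates a scanner whose each incremental position estimate builds on the previous one, so small per-step errors compound over the trajectory. The motion model and error magnitude are arbitrary assumptions, not values from the disclosure.

```python
import random

def simulate_drift(steps, per_step_error=0.01, seed=42):
    """Return the accumulated position error after `steps` incremental moves.

    Each step the scanner truly moves 1.0 unit, but the measured step carries
    a small uniform error; because every estimate is built on the previous
    one, the errors accumulate rather than average out per frame.
    """
    random.seed(seed)
    true_position = 0.0
    estimated_position = 0.0
    for _ in range(steps):
        true_position += 1.0
        measured_step = 1.0 + random.uniform(-per_step_error, per_step_error)
        estimated_position += measured_step
    return abs(estimated_position - true_position)
```

Running this for short versus long trajectories illustrates why drift tends to grow with scan length, and why scanning one space at a time can limit it.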
- Augmenting one set of data (e.g., 3D data or 2D data) or one model (e.g., 3D model or 2D model) with another set of data (e.g., 3D data or 2D data) or another model (e.g., 3D model or 2D model) can address some of the aforementioned issues. Augmenting one set of data or one model with another set of data or another model can be used to revise, refine, or complete, the data or the models.
- the disclosure primarily relates to augmenting 3D data with a 3D model.
- One of ordinary skill in the art will appreciate that the principles disclosed herein can apply to various other combinations of augmentations between 2D data, 2D models, 3D data, and 3D models.
- 3D data of an interior environment can be augmented with 3D data of an exterior environment
- 3D data of an interior environment can be augmented with a 3D model of an exterior environment
- a 3D model of an interior environment can be augmented with 3D data of an exterior environment
- a 3D model of an interior environment can be augmented with a 3D model of an exterior environment, and the like.
- One of ordinary skill in the art will appreciate various other combinations of augmentations between 2D data, 2D models, 3D data, and 3D models.
- Augmenting one set of data or one model with another set of data or another model can include correlating or mapping, aligning, deforming, scaling, cropping, hole filling (e.g., completing), and the like.
- augmenting one set of data or model with another set of data or another model includes solving an optimization problem that includes finding the optimal solution from all feasible or possible solutions, for example given one or more constraints.
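One common way to pose such an alignment as an optimization problem is a least-squares rigid fit between matched element locations (e.g., door corners) in the two sources, with the constraint that the transform be a rotation plus translation. The disclosure does not specify this particular formulation; the 2D Procrustes-style sketch below is an illustrative assumption.

```python
import math

def fit_rigid_2d(source_pts, target_pts):
    """Least-squares rigid 2D transform mapping source_pts onto target_pts.

    Returns (theta, (tx, ty)) such that rotating a source point by theta and
    translating by (tx, ty) minimizes the summed squared distances to the
    matched target points.
    """
    n = len(source_pts)
    scx = sum(p[0] for p in source_pts) / n
    scy = sum(p[1] for p in source_pts) / n
    tcx = sum(p[0] for p in target_pts) / n
    tcy = sum(p[1] for p in target_pts) / n
    # Accumulate cross-covariance terms of the centered point sets.
    a = b = 0.0
    for (sx, sy), (tx, ty) in zip(source_pts, target_pts):
        sx, sy, tx, ty = sx - scx, sy - scy, tx - tcx, ty - tcy
        a += sx * tx + sy * ty
        b += sx * ty - sy * tx
    theta = math.atan2(b, a)  # optimal rotation angle
    c, s = math.cos(theta), math.sin(theta)
    # Translation maps the rotated source centroid onto the target centroid.
    tx0 = tcx - (c * scx - s * scy)
    ty0 = tcy - (s * scx + c * scy)
    return theta, (tx0, ty0)

def apply_rigid_2d(theta, t, pt):
    """Apply the fitted rotation and translation to a single point."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * pt[0] - s * pt[1] + t[0], s * pt[0] + c * pt[1] + t[1])
```

Additional constraints from the disclosure (e.g., assumed wall thickness, matched sides) could enter such a formulation as extra residual terms or hard constraints.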
- FIG. 5A illustrates interior 3D data 5002 augmented with exterior 3D model 5004, according to some embodiments.
- FIG. 5B illustrates a magnified view of a portion of FIG. 5A, according to some embodiments.
- interior 3D data 5002 and exterior 3D model 5004 can be captured or generated using a single 3D reconstruction process. In some embodiments, interior 3D data 5002 and exterior 3D model 5004 can be captured or generated using multiple, separate 3D reconstruction processes. In one example, interior 3D data 5002 can be captured using one 3D reconstruction process and exterior 3D model 5004 can be generated using another 3D reconstruction process.
- Although FIGS. 5A-5B and the disclosure herein are in relation to interior 3D data and an exterior 3D model, one of ordinary skill in the art will appreciate that the principles described herein apply to other configurations (e.g., 3D data and 3D data, 3D data and 3D model, 3D model and 3D model, etc.).
- Interior 3D data 5002, or portions thereof, can be augmented with exterior 3D model 5004, or portions thereof. Augmenting interior 3D data 5002 with exterior 3D model 5004 can include correlating or mapping, aligning, deforming, scaling, cropping, hole filling (e.g., completing), and the like.
- the augmenting can include generating a common coordinate system for interior 3D data 5002 and exterior 3D model 5004.
- Interior 3D data 5002 can have an associated coordinate system (e.g., an interior coordinate system) and exterior 3D model 5004 can have an associated coordinate system (e.g., an exterior coordinate system).
- the common coordinate system can be generated based on the interior coordinate system and the exterior coordinate system.
- the common coordinate system can be generated by matching the interior coordinate system with the exterior coordinate system, or vice versa.
- the augmenting can be based on location information associated with interior 3D data 5002 and exterior 3D model 5004. Examples of location information include latitude, longitude, elevation, and the like. Interior 3D data 5002 can be augmented with exterior 3D model 5004 relative to a common coordinate system based at least in part on location information.
- the augmenting can be based on one or more sides associated with interior 3D data 5002 and exterior 3D model 5004 where the sides correspond to the sides of the underlying building structure.
- interior 3D data 5002 can have a front side and exterior 3D model 5004 can have a front side.
- interior 3D data 5002 and exterior 3D model 5004 can be augmented by substantially aligning the front side of interior 3D data 5002 and the front side of exterior 3D model 5004 in a common coordinate system.
- the sides can be established or identified based on identified elements and their generally associated sides.
- a building structure may have several exterior doors, where a hinged door may be associated with a front side, and where a sliding door may be associated with a back side.
- a building structure may have several exterior windows, where a bay window may be associated with a front side.
- the augmenting can be based on an outline of interior 3D data 5002 and an outline of exterior 3D model 5004.
- the outline of interior 3D data 5002 is substantially aligned with the outline of exterior 3D model 5004, for example, based on one or more common architectural elements such as windows, and preferably those with industry standard attributes, such as doors, or based on one or more values derived from the architectural elements.
- a door of the outline of interior 3D data 5002 can be matched to a corresponding door of the outline of exterior 3D model 5004, and the outline of interior 3D data 5002 can be substantially aligned with the outline of exterior 3D model 5004 based on the matched door.
- a door of interior 3D data 5002 can be matched to a corresponding door of exterior 3D model 5004, interior 3D data 5002 can be substantially aligned with exterior 3D model 5004 based on the matched doors, an exterior wall thickness (i.e., the thickness of the wall between interior 3D data 5002 and exterior 3D model 5004) can be derived based on the substantial alignment of interior 3D data 5002 with exterior 3D model 5004, and the outline of interior 3D data 5002 can be substantially aligned with the outline of exterior 3D model 5004 based on the derived exterior wall thickness.
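The exterior-wall-thickness derivation described above can be sketched as projecting the offset between matched interior and exterior door points onto the shared door normal. The function and parameter names are hypothetical, chosen only to illustrate the geometry.

```python
import math

def wall_thickness(interior_door_point, exterior_door_point, door_normal):
    """Estimate wall thickness from matched door points on either face.

    interior_door_point / exterior_door_point: matched 3D points on the
    interior and exterior faces of the same door opening (assumed inputs).
    door_normal: any vector normal to the door plane.
    Returns the offset between the two faces measured along the unit normal.
    """
    norm = math.sqrt(sum(c * c for c in door_normal))
    unit = [c / norm for c in door_normal]
    offset = [e - i for i, e in zip(interior_door_point, exterior_door_point)]
    # Project the interior-to-exterior offset onto the unit door normal.
    return abs(sum(o * u for o, u in zip(offset, unit)))
```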
- one or more architectural elements are substantially aligned according to axis alignment between the architectural elements of the two data sources. In some embodiments, this occurs after generating a common coordinate system. In some embodiments, axis alignment of matching architectural elements, or features thereof, generates the common coordinate system. For example, a window of 3D data 5002 having a planar orientation in an x-y plane is substantially aligned, along its borders, with a window of 3D model 5004 having a matching planar orientation according to axes orientations. For planar architectural elements, in some examples this means two axes of the matching architectural elements are at least parallel to each other, with corresponding points or features of the architectural element falling on the third orthogonal axis.
- the x-axis of a window in 3D data 5002 is parallel to the x-axis of a window in 3D model 5004, and the y-axis of a window in 3D data 5002 is parallel to the y-axis of a window in 3D model 5004, with the corners of the window each falling on the z-axis.
- While architectural elements may substantially align with one another in such a way, they are unlikely to perfectly overlay one another due to distal surface separation. While axis alignment is discussed, point alignment or generation of lines between points may follow similar steps.
- the one or more substantially aligned architectural elements are orthogonal to one another.
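The axis-alignment test described above can be sketched as follows: two planar elements (e.g., windows) are treated as substantially aligned when their in-plane axes are parallel (or antiparallel). The element representation (dicts with unit `x_axis`/`y_axis` vectors) and the tolerance are assumptions for illustration.

```python
def dot(u, v):
    """Dot product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

def axes_parallel(axis_a, axis_b, tol=1e-6):
    """Unit axes are parallel or antiparallel when |cos(angle)| is ~1."""
    return abs(abs(dot(axis_a, axis_b)) - 1.0) < tol

def substantially_aligned(elem_a, elem_b, tol=1e-6):
    """Check axis alignment of two planar elements.

    Each element is a dict with unit 'x_axis' and 'y_axis' vectors
    (an assumed representation). A fuller check would also verify that
    corresponding corners fall on the remaining orthogonal axis.
    """
    return (axes_parallel(elem_a["x_axis"], elem_b["x_axis"], tol)
            and axes_parallel(elem_a["y_axis"], elem_b["y_axis"], tol))
```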
- Outline of interior 3D data 5002 can be generated based on a top-down view of interior 3D data 5002.
- FIG. 6A illustrates a top-down view of interior 3D data, according to some embodiments.
- Interior 3D data 5002 of FIGS. 5A-5B are of a different interior than interior 3D data of FIGS. 6A-6C.
- Outline of exterior 3D model 5004 can be generated based on a top-down view of exterior 3D model 5004.
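Generating an outline from a top-down view can be sketched as projecting the 3D points onto the ground (x-y) plane and tracing their boundary. The sketch below uses a convex hull (Andrew's monotone chain) as a simplifying assumption; a real floorplan outline would likely need a concave boundary (e.g., an alpha shape) to follow interior corners.

```python
def outline_from_top_down(points_3d):
    """Project 3D points to the x-y plane and return their convex hull.

    Returns the hull vertices in counter-clockwise order as a stand-in for
    an outline; collinear boundary points are dropped.
    """
    pts = sorted(set((x, y) for x, y, _z in points_3d))  # drop height, dedupe
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); positive for a left turn.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]
```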
- the augmenting can be based on one or more elements common to interior 3D data 5002 and exterior 3D model 5004.
- the elements are associated with a building object.
- the elements are associated with a structure of interest, for example of the building object.
- interior 3D data 5002 can be augmented with exterior 3D model 5004 based on doors or windows that are common to the building object.
- the elements are not associated with a structure of interest, for example of the building object.
- interior 3D data 5002 can be augmented with exterior 3D model 5004 based on vehicles, utility poles, trees, foliage, other structures, and the like, that are not associated with the building object.
- the augmenting based on elements common to interior 3D data 5002 and exterior 3D model 5004 can include identifying an aspect (e.g., plane) of an element of interior 3D data 5002 (such as by semantic segmentation or object recognition), identifying corresponding aspect (e.g., plane) of a corresponding element of exterior 3D model 5004 (such as by semantic segmentation or object recognition), and substantially aligning the aspect of the element of interior 3D data 5002 with the corresponding aspect of the corresponding element of exterior 3D model 5004.
- substantially aligning the aspect of the element of interior 3D data 5002 with the corresponding aspect of the corresponding element of exterior 3D model 5004 can be based on one or more assumptions. Examples of assumptions include door thickness, window thickness, wall thickness, and other anchoring aspects common to interior 3D data 5002 and exterior 3D model 5004.
- the augmenting can be used to compensate for limitations of different capturing or scanning techniques, different reconstructions techniques, different modeling techniques, or a combination thereof.
- the capturing or scanning technique, the reconstruction technique, the modeling technique, or a combination thereof, related to a first data capture may cause lines that are straight in the environment to appear wavy, broken, or disjointed in the resultant reconstructed output (such as interior 3D data 5002); however, the capturing or scanning technique, the reconstruction technique, the modeling technique, or a combination thereof, related to a second data capture may cause the corresponding lines in the resultant modeled output (such as exterior 3D model 5004) to appear straight.
- an interior 3D model generated from interior 3D data 5002, which can be a mesh input, may include lines that are wavy, broken, or disjointed; however, exterior 3D model 5004, which can be generated from primitive-based modeling, may include straight lines.
- the augmenting can be used to compensate for potentially problematic surfaces in the environment.
- potentially problematic surfaces in an interior portion of the environment can lead to duplicative elements, missing data, or additional data in interior 3D data 5002; however, an exterior portion of the environment may not include the same potentially problematic surfaces and therefore exterior 3D model 5004 may not include the same duplicative elements, missing data, or additional data.
- Augmenting interior 3D data 5002 with exterior 3D model 5004 can correct the distortions in interior 3D data 5002 based on exterior 3D model 5004.
- the correlating or mapping can include assigning confidence values to elements in interior 3D data 5002, exterior 3D model 5004, or both.
- Elements can include points, surfaces, and the like.
- high confidence values can be assigned to elements that are common to both interior 3D data 5002 and exterior 3D model 5004, and, in some embodiments, low confidence values can be assigned to all other elements.
- the confidence values are based on co-visibility of the elements in interior 3D data 5002 and exterior 3D model 5004. For example, high confidence values can be assigned to doors and windows that are visible in both interior 3D data 5002 and exterior 3D model 5004.
- the confidence values are based on commonality of the elements in interior 3D data 5002 and exterior 3D model 5004, which may not necessarily be co-visible in interior 3D data 5002 and exterior 3D model 5004.
- high confidence values can be assigned to peripheral surfaces (e.g., interior walls) of interior 3D data 5002 that are common to surfaces (e.g., exterior walls) of exterior 3D model 5004.
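The confidence assignment described above can be sketched as follows; the numeric high/low values and the set-based test for common element labels are illustrative assumptions, not values from the disclosure:

```python
def assign_confidence(interior_elements, exterior_elements,
                      high=0.9, low=0.1):
    """Assign a confidence value to each element label.

    Elements common to both interior 3D data and the exterior 3D model
    (e.g., a window visible in both) receive a high confidence; all
    other elements receive a low confidence.
    """
    common = set(interior_elements) & set(exterior_elements)
    return {element: (high if element in common else low)
            for element in set(interior_elements) | set(exterior_elements)}

# Example: a window common to both, a headboard only in the interior data.
scores = assign_confidence({"window_5008", "headboard_5010"},
                           {"window_5008", "front_door"})
```

In practice the commonality test would compare geometry or semantic labels rather than exact string identity; the dictionary form above simply makes the high/low partition explicit.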
- the augmenting can include identifying, correlating, and substantially aligning common elements in interior 3D data 5002 and exterior 3D model 5004, and revising interior 3D data 5002, exterior 3D model 5004, or both, based on the correlation or alignment.
- interior 3D data 5002 can include a portion of window 5008 and headboard 5010 directly below the portion of window 5008, and exterior 3D model 5004 can include all of window 5008.
- Window 5008 in interior 3D data 5002 and window 5008 in exterior 3D model 5004 can be correlated and aligned, and the portion of window 5008 in interior 3D data 5002 can be revised (e.g., filled in) based on window 5008 in exterior 3D model 5004.
- unmodeled portions 4014 are predicted geometries of surfaces.
- unmodeled portions 4014 are predicted geometries based on images illustrated in FIGS. 4B-4E.
- unmodeled portions 4014 are predicted geometries based on images illustrated in FIGS. 4B-4E in further view of interior 3D data, an interior 3D model, or both, of structure 4000.
- elements such as windows and doors that are common to both (exterior) model 4010 and the interior model can be used to correlate and align model 4010 and the interior model.
- the common elements may be those that are at the front and the left of structure 4000.
- Model 4010 can be revised based on the correlation or alignment of the common elements.
- unmodeled portions 4014 can be generated or filled in, for example with windows, doors, and the like, that are in the interior model.
- Interior 3D data 5002 can be offset from exterior 3D model 5004, for example, based on one or more common architectural elements, such as windows or doors (preferably those with industry standard attributes), or based on one or more values derived from the architectural elements.
- a door of interior 3D data 5002 can be matched to a corresponding door of exterior 3D model 5004, and interior 3D data 5002 can be offset from exterior 3D model 5004 based on an assumed thickness of the matched door (as doors are typically set to manufacturer and industry standards for consistency).
- a door of interior 3D data 5002 can be matched to a corresponding door of exterior 3D model 5004, interior 3D data 5002 can be substantially aligned with exterior 3D model 5004 based on the matched doors, an exterior wall thickness (i.e., thickness of the wall between interior 3D data 5002 and exterior 3D model 5004) can be derived based on the substantial alignment of interior 3D data 5002 with exterior 3D model 5004, and interior 3D data 5002 can be offset from exterior 3D model 5004 based on the derived exterior wall thickness.
- Wall offset 5006 is an example of an offset of interior 3D data 5002 relative to exterior 3D model 5004.
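The offset step can be sketched as a translation of interior points along a wall normal by the derived wall thickness; the function name and the tuple-based point representation are assumptions for illustration:

```python
def offset_interior(interior_points, wall_normal, wall_thickness):
    """Translate interior 3D points along the unit wall normal by the
    derived wall thickness (e.g., a thickness derived from a matched
    door), producing an offset such as wall offset 5006."""
    nx, ny, nz = wall_normal
    return [(x - nx * wall_thickness,
             y - ny * wall_thickness,
             z - nz * wall_thickness)
            for (x, y, z) in interior_points]

# Offset interior points 0.5 units inward along the +z wall normal.
shifted = offset_interior([(1.0, 2.0, 3.0)], (0.0, 0.0, 1.0), 0.5)
```

A real implementation would apply the offset per wall, using each wall's own normal and a thickness derived as described above.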
- interior 3D data 5002 can be captured following a one space at a time approach in which each space (e.g., room) is captured or scanned one at a time and the scans of each space are combined into an aggregate scan that represents the environment. As mentioned above, capturing or scanning in this way may not maintain the relationship between the spaces.
- One way to reintroduce the relationship between the spaces in interior 3D data 5002 can be by leveraging exterior 3D model 5004.
- the spaces in interior 3D data 5002 can be pulled apart or dilated based on exterior 3D model 5004 and, for example, aligning one or more common architectural elements. Pulling apart or dilating interior 3D data 5002 based on exterior 3D model 5004 in this manner can introduce interior wall offsets that are not present in the interior 3D data 5002 at the time of capture/aggregation.
- interior 3D data 5002, exterior 3D model 5004, both, or portions thereof, a coordinate system associated with interior 3D data 5002 (e.g., an interior coordinate system), a coordinate system associated with exterior 3D model 5004 (e.g., exterior coordinate system), or both, or a combination thereof, can be scaled based on one or more derived scaling factors.
- an interior scaling factor can be derived from interior 3D data 5002, interior coordinate system, or both, and interior 3D data 5002, interior coordinate system, exterior 3D model 5004, exterior coordinate system, or a combination thereof can be scaled based on the derived interior scaling factor.
- an exterior scaling factor can be derived from exterior 3D model 5004, exterior coordinate system, or both, and exterior 3D model 5004, exterior coordinate system, interior 3D data 5002, interior coordinate system, or a combination thereof can be scaled based on the derived exterior scaling factor.
- an interior scaling factor can be derived from interior 3D data 5002, interior coordinate system, or both, interior 3D data 5002, interior coordinate system, or both can be scaled based on the derived interior scaling factor
- an exterior scaling factor can be derived based on the interior scaling factor, the scaled interior 3D data 5002, the scaled interior coordinate system, or a combination thereof, and exterior 3D model 5004, exterior coordinate system, or both can be scaled based on the derived exterior scaling factor.
- an exterior scaling factor can be derived from exterior 3D model 5004, exterior coordinate system, or both, exterior 3D model 5004, exterior coordinate system, or both can be scaled based on the derived exterior scaling factor
- an interior scaling factor can be derived from the exterior scaling factor, the scaled exterior 3D model 5004, the scaled exterior coordinate system, or both, and interior 3D data 5002, interior coordinate system, or both can be scaled based on the derived interior scaling factor.
- a quality metric, a confidence value, or both can be derived for and associated with interior 3D data 5002 and exterior 3D model 5004.
- visual data, depth data, or both from a ground- level imager may have a relatively high quality metric or high confidence value
- visual data, depth data, or both, from an aerial imager may have a relatively low quality metric or a low confidence value.
- the quality metric or the confidence value may be a function of distance from imager to subject.
- a 2D model such as an architectural plan may have a relatively high quality metric or high confidence value
- a 2D model such as a floor plan generated from visual data, depth data, or both, may have a relatively low quality metric or a low confidence value.
- a 3D model generated from an architectural plan may have a relatively high quality metric or high confidence value and a 3D model generated from visual data, depth data, or both, may have a relatively low quality metric or low confidence value.
- the data/model with a higher quality metric or confidence value can be used as the base data/model.
- interior 3D data 5002 can be the base data/model and in this example, an interior scaling factor can be derived from interior 3D data 5002, interior 3D data 5002 can be scaled based on the derived interior scaling factor, an exterior scaling factor can be derived based on the interior scaling factor, and exterior 3D model 5004 can be scaled based on the derived exterior scaling factor.
- exterior 3D model 5004 can be the base data/model and in this example, an exterior scaling factor can be derived from exterior 3D model 5004, exterior 3D model 5004 can be scaled based on the derived exterior scaling factor, an interior scaling factor can be derived from the exterior scaling factor, and interior 3D data 5002 can be scaled based on the derived interior scaling factor.
- deriving one scaling factor from another can include calculating a conversion factor to be applied to one scaling factor to derive another.
- deriving an exterior scaling factor from an interior scaling factor can include calculating a conversion factor to be applied to the interior scaling factor to derive the exterior scaling factor.
- deriving an interior scaling factor from an exterior scaling factor can include calculating a conversion factor to be applied to the exterior scaling factor to derive the interior scaling factor.
- deriving one scaling factor from another can be based on common elements.
- an interior scaling factor can be derived for interior 3D data 5002
- interior 3D data 5002 can be scaled based on the interior scaling factor
- an exterior scaling factor can be derived from the interior scaling factor based on window 5008 which is common to both interior 3D data 5002 and exterior 3D model 5004
- exterior 3D model 5004 can be scaled based on the exterior scaling factor.
- Deriving the exterior scaling factor from the interior scaling factor based on window 5008 which is common to both interior 3D data 5002 and exterior 3D model 5004 can include scaling window 5008 of exterior 3D model 5004 until its dimensions match that of window 5008 of the scaled interior 3D data 5002.
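The conversion from an interior scaling factor to an exterior scaling factor via a common element can be sketched with the window-width matching described above; the function name and use of width alone (rather than full dimensions) are illustrative assumptions:

```python
def derive_exterior_scale(interior_scale, interior_window_width,
                          exterior_window_width):
    """Derive the exterior scaling factor from the interior one using a
    window common to both models (e.g., window 5008).

    The conversion factor is chosen so that, after scaling, the window
    width in exterior 3D model 5004 matches the window width in the
    scaled interior 3D data 5002:
        exterior_window_width * s_ext == interior_window_width * s_int
    """
    conversion = interior_window_width / exterior_window_width
    return interior_scale * conversion

# Interior scale 2.0; the common window measures 1.2 units in the
# interior data and 1.5 units in the exterior model.
s_ext = derive_exterior_scale(2.0, 1.2, 1.5)
```

After applying `s_ext`, the two copies of the common window have equal scaled widths, which is the matching condition the text describes.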
- deriving one scaling factor from another can be based on one or more industry standards.
- an interior scaling factor can be derived from interior 3D data 5002
- interior 3D data 5002 can be scaled based on the interior scaling factor
- an exterior scaling factor can be derived from the interior scaling factor such that exterior 3D model 5004 scaled based on the exterior scaling factor satisfies an industry standard exterior wall width/depth
- exterior 3D model 5004 can be scaled based on the exterior scaling factor.
- interior anchor poses for interior 3D data 5002 and exterior anchor poses for exterior 3D model 5004 are determined.
- a set of common anchor poses including anchor poses that are common to the interior anchor poses and the exterior anchor poses are determined.
- the 3D reconstruction process can include one or more subprocesses such as, for example, a reconstruction subprocess.
- the reconstruction subprocess can be manual, semi-automatic, or fully automatic.
- One or more tools may be used, for example by a human, in the reconstruction subprocess.
- FIG. 6A illustrates a top-down view of a floorplan representation generated based on 3D data (sometimes referred to as raw, unprocessed, or unstructured 3D data), according to some embodiments.
- the floorplan representation includes first bedroom 6002A, second bedroom 6002B, first bathroom 6004A, second bathroom 6004B, kitchen 6006, dining area 6008, living room 6010, and home office 6012.
- FIG. 6B illustrates a top-down view of a floorplan representation and augmented 3D data, according to some embodiments.
- FIG. 6C illustrates a perspective view of a floorplan representation and augmented 3D data, according to some embodiments.
- augmentation of the 3D data illustrated in FIG. 6A is a function of distance from cursor 6020.
- cursor 6020 has a 3D position (X, Y, Z). Rays can be cast in all directions from a position of cursor 6020.
- the rays that are cast from the position of cursor 6020 can be of a predetermined length.
- the first 3D data that the ray intersects can be augmented.
- the 3D data that is not the first 3D data can also be augmented.
- the augmentation can include, for example, brightness, opacity, and the like. Augmenting the 3D data in this manner can be a useful tool in assisting a human to identify and label the 3D data.
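A minimal sketch of distance-based augmentation is shown below; it applies a linear brightness falloff from the cursor position out to the predetermined ray length. The falloff curve and the scalar brightness output are illustrative assumptions (a full implementation would cast rays and distinguish first-intersected 3D data):

```python
import math

def augment_brightness(points, cursor, max_distance=2.0):
    """Per-point brightness in [0, 1] that falls off linearly with
    distance from the cursor; points beyond max_distance (the
    predetermined ray length) remain fully dim."""
    return [max(0.0, 1.0 - math.dist(p, cursor) / max_distance)
            for p in points]

# Points at distance 0, 1, and 3 from a cursor at the origin.
levels = augment_brightness([(0, 0, 0), (1.0, 0, 0), (3.0, 0, 0)],
                            (0, 0, 0))
```

The same pattern applies to opacity or any other per-point visual attribute used to guide a human labeler.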
- IMU can include, among other components, one or more gyroscopes.
- a gyroscope measures rotation about a known point. Gyroscope measurements can drift over time due to integration of imperfections and noise within the gyroscope or, more generally, the IMU. Of the roll axis, the pitch axis, and the yaw axis, the yaw axis is most sensitive to drift. The drift can cause angular error. The angular error can be measured in degrees of rotation per unit of time.
- FIGS. 7A and 7B illustrate floorplan 700 with capture path 702 and capture path 752, respectively, according to some embodiments. Each capture path can include one or more rotations (sometimes referred to as scan directions).
- An angular error of a capture path can be related to or a function of an angular error of each rotation of the capture path.
- the angular error of a capture path can be an accumulation of the angular errors of the rotations of the capture path.
- the angular error of each rotation can have an associated magnitude and direction.
- the angular error of the capture path can also have an associated magnitude and direction.
- the capture path can include clockwise rotations and counterclockwise rotations. Each clockwise rotation can result in a positive angular error and each counterclockwise rotation can result in a negative angular error.
- Capture path 702 includes clockwise rotations 704, 706, 708, 712, and 714, and counterclockwise rotations 710 and 716. For the sake of simplicity, assuming the angular error of each rotation is of equal magnitude, the angular error of capture path 702 can be very positive.
- Capture path 752 includes clockwise rotations 754, 756, and 764, and counterclockwise rotations 758, 760, 762, and 766. For the sake of simplicity, assuming the angular error of each rotation is of equal magnitude, the angular error of capture path 752 can be slightly negative.
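The signed accumulation described for capture paths 702 and 752 can be sketched directly; the per-rotation error magnitude of 1.0 degree is the equal-magnitude simplification from the text, and the "cw"/"ccw" encoding is an assumption for illustration:

```python
def path_angular_error(rotations, per_rotation_error=1.0):
    """Accumulate signed angular error over a capture path: each
    clockwise rotation ('cw') contributes +per_rotation_error and each
    counterclockwise rotation ('ccw') contributes -per_rotation_error."""
    signs = {"cw": +1, "ccw": -1}
    return per_rotation_error * sum(signs[r] for r in rotations)

# Capture path 702: five clockwise, two counterclockwise rotations.
err_702 = path_angular_error(["cw"] * 5 + ["ccw"] * 2)
# Capture path 752: three clockwise, four counterclockwise rotations.
err_752 = path_angular_error(["cw"] * 3 + ["ccw"] * 4)
```

Under this simplification, path 702 accumulates a strongly positive error (+3) and path 752 a slightly negative error (-1), matching the comparison in the text.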
- the 3D reconstruction process can include determining and displaying a recommended or suggested rotation during 3D scanning, for example, in an effort to minimize drift or angular error.
- determining a recommended or suggested rotation can be based on one or more previous rotations. For example, determining a recommended or suggested rotation can be based on the magnitude, the direction, or both, of one or more previous rotations. In some embodiments, determining recommended or suggested rotations can be based on an angular error of one or more previous rotations. For example, determining a recommended or suggested rotation can be based on the magnitude, the direction, or both, of the angular error of one or more previous rotations.
- the angular error is zero.
- the angular error is slightly positive.
- a counterclockwise recommended or suggested rotation can be determined and displayed. The counterclockwise recommended or suggested rotation is in the opposite direction of the clockwise rotation 704 of capture path 702 and the clockwise rotation 754 of capture path 752 in an effort to lower the angular error from slightly positive to closer to zero.
- the angular error is slightly more positive. The counterclockwise recommended or suggested rotation is not followed.
- a counterclockwise recommended or suggested rotation can be determined and displayed.
- the counterclockwise recommended or suggested rotation is in the opposite direction of the clockwise rotations 704 and 706 of capture path 702 and clockwise rotations 754 and 756 of capture path 752 in an effort to lower the angular error from slightly more positive to closer to zero. If the counterclockwise recommended or suggested rotation is not followed, the next rotation is a clockwise rotation as illustrated by clockwise rotation 708 of capture path 702. If the counterclockwise recommended or suggested rotation is followed, the next rotation is a counterclockwise rotation as illustrated by counterclockwise rotation 758 of capture path 752.
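The recommendation logic above reduces to driving the accumulated error back toward zero; the function below is a minimal sketch under that reading, with the "cw"/"ccw"/None return encoding as an assumption:

```python
def recommend_rotation(accumulated_error):
    """Recommend a rotation direction that moves the accumulated
    angular error toward zero: a positive (clockwise-accumulated) error
    yields a counterclockwise recommendation, a negative error a
    clockwise one, and zero error no recommendation."""
    if accumulated_error > 0:
        return "ccw"
    if accumulated_error < 0:
        return "cw"
    return None  # no correction needed when the error is zero
```

In a capture application, the returned direction would be displayed to the user during 3D scanning, as described above.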
- the 3D data can include private information in the environment. Examples of private information include personally identifiable information, pictures, medications, assistive devices or equipment, and the like.
- FIG. 8 illustrates a computer system 800 configured to perform any of the steps described herein.
- the computer system 800 includes an input/output (I/O) Subsystem 802 or other communication mechanism for communicating information, and a hardware processor, or multiple processors, 804 coupled with the I/O Subsystem 802 for processing information.
- the processor(s) 804 may be, for example, one or more general purpose microprocessors.
- the computer system 800 also includes a main memory 806, such as a random access memory (RAM), cache and/or other dynamic storage devices, coupled to the I/O Subsystem 802 for storing information and instructions to be executed by processor 804.
- main memory 806 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by the processor 804.
- Such instructions when stored in storage media accessible to the processor 804, render the computer system 800 into a special purpose machine that is customized to perform the operations specified in the instructions.
- the computer system 800 further includes a read only memory (ROM) 808 or other static storage device coupled to the I/O Subsystem 802 for storing static information and instructions for the processor 804.
- a storage device 810 such as a magnetic disk, optical disk, or USB thumb drive (Flash drive), etc., is provided and coupled to the I/O Subsystem 802 for storing information and instructions.
- the computer system 800 may be coupled via the I/O Subsystem 802 to an output device 812, such as a cathode ray tube (CRT) or LCD display (or touch screen), for displaying information to a user.
- An input device 814 is coupled to the I/O Subsystem 802 for communicating information and command selections to the processor 804.
- a control device 816, such as a mouse, a trackball, or cursor direction keys, can be coupled to the I/O Subsystem 802 for communicating direction information and command selections to the processor 804 and for controlling cursor movement on the output device 812.
- This input/control device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane.
- the same direction information and command selections as cursor control may be implemented via receiving touches on a touch screen without a cursor.
- the computing system 800 may include a user interface module to implement a GUI that may be stored in a mass storage device as computer executable program instructions that are executed by the computing device(s).
- the computer system 800 may further, as described below, implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computer system causes or programs the computer system 800 to be a special-purpose machine.
- the techniques herein are performed by the computer system 800 in response to the processor(s) 804 executing one or more sequences of one or more computer readable program instructions contained in the main memory 806. Such instructions may be read into the main memory 806 from another storage medium, such as storage device 810. Execution of the sequences of instructions contained in the main memory 806 causes the processor(s) 804 to perform the process steps described herein.
- hard-wired circuitry may be used in place of or in combination with software instructions.
- Various forms of computer readable storage media may be involved in carrying one or more sequences of one or more computer readable program instructions to the processor 804 for execution.
- the instructions may initially be carried on a magnetic disk or solid state drive of a remote computer.
- the remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line or cable using a modem (or an optical network unit in the case of fiber).
- a modem local to the computer system 800 can receive the data on the telephone line and use an infra-red transmitter to convert the data to an infra-red signal.
- An infra-red detector can receive the data carried in the infra-red signal and appropriate circuitry can place the data on the I/O Subsystem 802.
- the I/O Subsystem 802 carries the data to the main memory 806, from which the processor 804 retrieves and executes the instructions.
- the instructions received by the main memory 806 may optionally be stored on the storage device 810 either before or after execution by the processor 804.
- the computer system 800 also includes a communication interface 818 coupled to the I/O Subsystem 802.
- the communication interface 818 provides a two-way data communication coupling to a network link 820 that is connected to a local network 822.
- the communication interface 818 may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line.
- the communication interface 818 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN (or a WAN component to communicate with a WAN).
- Wireless links may also be implemented.
- the communication interface 818 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
- the computer system 800 can send messages and receive data, including program code, through the network(s), the network link 820 and the communication interface 818.
- a server 830 might transmit a requested code for an application program through the Internet 828, the ISP 826, the local network 822 and communication interface 818.
- the received code may be executed by the processor 804 as it is received, and/or stored in the storage device 810, or other non-volatile storage for later execution.
- Computing platform(s) 902 may be configured by machine-readable instructions 906.
- Machine-readable instructions 906 may include one or more instruction modules.
- the instruction modules may include computer program modules.
- the instruction modules may include one or more of image receiving module 908, model generating module 910, model augmentation module 912, system generating module 914, side identifying module 916, outline generating module 918, element match module 920, model alignment module 922, value derivation module 924, element identifying module 926, element correlation module 928, aspect identifying module 930, factor derivation module 932, image scaling module 934, subset selection module 936, angular error calculation module 938, rotation determination module 940, rotation display module 942, and/or other instruction modules.
- Image receiving module 908 may be configured to receive a first plurality of images.
- Image receiving module 908 may be configured to receive a second plurality of images.
- the first plurality of images and the second plurality of images may include at least one of visual data or depth data.
- the visual data may include at least one of image data or video data.
- the depth data may include at least one of point clouds, line clouds, meshes, or points.
- the first plurality of images and second plurality of images may be captured by one or more of a smartphone, a tablet computer, an augmented reality headset, a virtual reality headset, a drone, and an aerial platform.
- Each image of the first plurality of images and the second plurality of images may include a building object. Each image of the first plurality of images may include an interior of the building object. Each image of the second plurality of images may include an exterior of the building object.
- Model generating module 910 may be configured to generate a first 3D model based on the first plurality of images. Model generating module 910 may be configured to generate a second 3D model based on the second plurality of images.
- the first 3D model and the second 3D model may include at least one of a polygon-based model or a primitive-based model.
- the first 3D model and the second 3D model correspond to a building object.
- the first 3D model may correspond to an interior of the building object.
- the second 3D model may correspond to an exterior of the building object.
- Model augmentation module 912 may be configured to augment the first 3D model with the second 3D model.
- Side identifying module 916 may be configured to identify a first plurality of sides of the first 3D model.
- Side identifying module 916 may be configured to identify a second plurality of sides of the second 3D model.
- Each side of the first plurality of sides and the second plurality of sides may correspond to a side of a building object.
- Augmenting the first 3D model with the second 3D model may include substantially aligning the first plurality of sides with the second plurality of sides in a common coordinate system.
- Outline generating module 918 may be configured to generate the first outline of the first 3D model.
- Outline generating module 918 may be configured to generate the second outline of the second 3D model.
- Augmenting the first 3D model with the second 3D model may be based on a first outline of the first 3D model and a second outline of the second 3D model.
- Generating the first outline of the first 3D model may be based on a top-down view of the first 3D model.
- Generating the second outline of the second 3D model may be based on a top-down view of the second 3D model.
- Augmenting the first 3D model with the second 3D model may include substantially aligning the first outline of the first 3D model with the second outline of the second 3D model.
- Model alignment module 922 may be configured to substantially align the first outline of the first 3D model with the second outline of the second 3D model.
- Model alignment module 922 may be configured to substantially align the first outline of the first 3D model with the second outline of the second 3D model based on one or more architectural elements. Substantially aligning the first outline of the first 3D model with the second outline of the second 3D model may be based on the matched architectural element.
- Model alignment module 922 may be configured to substantially align the first outline of the first 3D model with the second outline of the second 3D model based on one or more values derived from one or more architectural elements.
- Element match module 920 may be configured to match an architectural element of the first 3D model with a corresponding architectural element of the second 3D model.
- Model alignment module 922 may be configured to substantially align the first 3D model with the second 3D model based on the matched architectural element.
- Value derivation module 924 may be configured to derive a value based on the substantial alignment of the first 3D model with the second 3D model.
- Model alignment module 922 may be configured to substantially align the first outline of the first 3D model with the second outline of the second 3D model based on the derived value.
- Element match module 920 may be configured to match an architectural element of the first plurality of images with a corresponding architectural element of the second plurality of images.
- Model alignment module 922 may be configured to substantially align the first 3D model with the second 3D model based on the matched architectural element.
- Value derivation module 924 may be configured to derive a value based on the substantial alignment of the first 3D model with the second 3D model.
- Model alignment module 922 may be configured to substantially align the first outline of the first 3D model with the second outline of the second 3D model based on the derived value.
- Element identifying module 926 may be configured to identify a first plurality of elements of the first 3D model. Element identifying module 926 may be configured to identify a second plurality of elements of the second 3D model. Identifying the first plurality of elements of the first 3D model may include semantically segmenting the first 3D model. Identifying the second plurality of elements of the second 3D model may include semantically segmenting the second 3D model. Identifying the first plurality of elements of the first 3D model may further include labeling the semantically segmented first 3D model. Identifying the second plurality of elements of the second 3D model may further include labeling the semantically segmented second 3D model.
- Element identifying module 926 may be configured to identify a third plurality of elements.
- the third plurality of elements may include elements common to the first plurality of elements and the second plurality of elements. Augmenting the first 3D model with the second 3D model may be based on the third plurality of elements.
- the third plurality of elements may include elements common to the first plurality of elements and the second plurality of elements.
- Aspect identifying module 930 may be configured to identify an aspect of an element of the first plurality of elements.
- Aspect identifying module 930 may be configured to identify a corresponding aspect of a corresponding element of the second plurality of elements. Augmenting the first 3D model with the second 3D model may include substantially aligning the aspect of the element of the first plurality of elements with the corresponding aspect of the corresponding element of the second plurality of elements.
- the aspect of the element of the first plurality of elements and the corresponding aspect of the corresponding element of the second plurality of elements may be a plane.
- Element identifying module 926 may be configured to identify a first plurality of elements of the first plurality of images.
- Element identifying module 926 may be configured to identify a second plurality of elements of the second plurality of images. Identifying the first plurality of elements of the first plurality of images may include semantically segmenting each image of the first plurality of images. Identifying the first plurality of elements of the first plurality of images may further include labeling the semantically segmented first plurality of images. Identifying the second plurality of elements of the second plurality of images may include semantically segmenting each image of the second plurality of images. Identifying the second plurality of elements of the second plurality of images may further include labeling the semantically segmented second plurality of images.
- the first plurality of elements and the second plurality of elements may be associated with a building object.
- the first plurality of elements and the second plurality of elements may be associated with a structure of interest of the building object.
- the first plurality of elements and the second plurality of elements may be not associated with a building object.
- Element correlation module 928 may be configured to correlate the first plurality of elements with the second plurality of elements. Augmenting the first 3D model with the second 3D model may be based on the correlated plurality of elements.
- Aspect identifying module 930 may be configured to identify an aspect of an element of the first plurality of elements.
- Aspect identifying module 930 may be configured to identify a corresponding aspect of a corresponding element of the second plurality of elements. Augmenting the first 3D model with the second 3D model may include substantially aligning the aspect of the element of the first plurality of elements with the corresponding aspect of the corresponding element of the second plurality of elements.
- the aspect of the element of the first plurality of elements and the corresponding aspect of the corresponding element of the second plurality of elements may be a plane.
- Augmenting the first 3D model with the second 3D model may include correlating the first 3D model with the second 3D model.
- Correlating the first 3D model with the second 3D model may include assigning a confidence value to each element of the first plurality of elements and the second plurality of elements. Assigning the confidence value to each element of the first plurality of elements and the second plurality of elements may be based on co-visibility of the first plurality of elements and the second plurality of elements. Assigning the confidence value to each element of the first plurality of elements and the second plurality of elements may be based on commonality of the first plurality of elements and the second plurality of elements.
- Augmenting the first 3D model with the second 3D model may include offsetting the first 3D model from the second 3D model. Offsetting the first 3D model from the second 3D model may be based on one or more architectural elements, on a matched architectural element, or on one or more values derived from the one or more architectural elements. Augmenting the first 3D model with the second 3D model may include dilating the first 3D model based on the second 3D model.
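A sketch of the two geometric operations named above — offsetting a model by a value derived from an architectural element (for example, a wall thickness measured at a matched door jamb), and dilating a model outline about its centroid. Function names and the vertex-array representation are illustrative assumptions.

```python
import numpy as np

def offset_model(vertices, direction, thickness):
    """Translate model vertices by a derived offset value (e.g., a wall
    thickness from a matched architectural element) along a unit direction."""
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    return np.asarray(vertices, dtype=float) + thickness * d

def dilate_outline(vertices, centroid=None, scale=1.0):
    """Dilate a model outline about its centroid by a uniform factor."""
    v = np.asarray(vertices, dtype=float)
    c = v.mean(axis=0) if centroid is None else np.asarray(centroid, dtype=float)
    return c + scale * (v - c)
```

An interior model could, for instance, be offset outward by a derived wall thickness before being combined with an exterior model.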
- the first plurality of images may include a first plurality of anchor poses, wherein the second plurality of images includes a second plurality of anchor poses. Augmenting the first 3D model with the second 3D model may be based on anchor poses common to the first plurality of anchor poses and the second plurality of anchor poses.
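One way augmentation based on common anchor poses could proceed: intersect the anchor identifiers of the two captures, then estimate the translation taking the second model's anchors onto the first's. The dict-of-positions representation and mean-difference estimate are illustrative assumptions; a full implementation would also estimate rotation.

```python
import numpy as np

def common_anchor_offset(first_anchors, second_anchors):
    """Find anchor poses common to both captures and estimate the translation
    taking the second model's anchors onto the first's.

    first_anchors / second_anchors: dict mapping anchor id -> 3D position.
    Returns (sorted common ids, mean positional difference) or None.
    """
    common = sorted(set(first_anchors) & set(second_anchors))
    if not common:
        return None
    diffs = [np.asarray(first_anchors[k], float) - np.asarray(second_anchors[k], float)
             for k in common]
    return common, np.mean(diffs, axis=0)
```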
- Factor derivation module 932 may be configured to derive a scaling factor based on at least one of the first plurality of images, the second plurality of images, the first 3D model, a first coordinate system of the first 3D model, the second 3D model, or a second coordinate system of the second 3D model.
- Image scaling module 934 may be configured to scale at least one of the first plurality of images, the second plurality of images, the first 3D model, a first coordinate system of the first 3D model, the second 3D model, or a second coordinate system of the second 3D model based on the derived scaling factor.
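A sketch of one way a scaling factor could be derived and applied — from corresponding point pairs in the two models, taking the ratio of their mean spreads about the centroid. The spread-ratio estimator and function names are illustrative assumptions; the disclosure permits deriving the factor from images or coordinate systems as well.

```python
import numpy as np

def derive_scaling_factor(points_a, points_b):
    """Derive a scaling factor between two models from corresponding point
    pairs: ratio of mean spread about the centroid in model B to model A."""
    a = np.asarray(points_a, dtype=float)
    b = np.asarray(points_b, dtype=float)
    spread_a = np.linalg.norm(a - a.mean(axis=0), axis=1).mean()
    spread_b = np.linalg.norm(b - b.mean(axis=0), axis=1).mean()
    return spread_b / spread_a

def scale_model(points, factor):
    """Scale a model (or its coordinate system) by the derived factor."""
    return np.asarray(points, dtype=float) * factor
```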
- Subset selection module 936 may be configured to select a first subset of images of the first plurality of images based on at least one of translation data associated with the first plurality of images or rotation data associated with the first plurality of images. Generating the first 3D model may be based on the first subset of images. Subset selection module 936 may be configured to select a second subset of images of the second plurality of images based on at least one of translation data associated with the second plurality of images or rotation data associated with the second plurality of images. Generating the second 3D model may be based on the second subset of images.
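Selecting a subset of images from translation and rotation data resembles keyframe selection: keep an image only when the camera has moved or turned enough since the last kept image. A sketch under that reading; the pose tuple format and the default thresholds are illustrative assumptions.

```python
import math

def select_subset(poses, min_translation=0.5, min_rotation_deg=15.0):
    """Select a subset of images by pose: keep an image only when its camera
    has translated or rotated enough since the last kept image.

    poses: list of (position, heading_deg) tuples, one per image, in capture order.
    Returns indices of the selected subset.
    """
    selected = [0]                          # always keep the first image
    for i in range(1, len(poses)):
        p, h = poses[i]
        lp, lh = poses[selected[-1]]
        translation = math.dist(p, lp)
        rotation = abs(h - lh) % 360.0      # wrap heading difference
        rotation = min(rotation, 360.0 - rotation)
        if translation >= min_translation or rotation >= min_rotation_deg:
            selected.append(i)
    return selected
```

The 3D model would then be generated from only the selected indices, discarding near-duplicate frames.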
- Angular error calculation module 938 may be configured to calculate a first angular error of a first capture path associated with the first plurality of images.
- Rotation determination module 940 may be configured to determine a suggested rotation based on the first angular error of the first capture path.
- Rotation display module 942 may be configured to display the suggested rotation.
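One plausible reading of the angular error of a capture path: a closed capture should sweep a full 360 degrees around the structure, and the error is whatever remains of that loop, which becomes the suggested rotation to display. The center-relative sweep computation below is an illustrative assumption, not the disclosed method.

```python
import math

def capture_path_angular_error(positions, center=(0.0, 0.0)):
    """Total angle swept around a center by a 2D capture path, and the
    angular error relative to a full 360-degree loop."""
    cx, cy = center
    angles = [math.atan2(y - cy, x - cx) for x, y in positions]
    swept = 0.0
    for a0, a1 in zip(angles, angles[1:]):
        d = a1 - a0
        while d > math.pi:                  # unwrap to the shortest signed step
            d -= 2 * math.pi
        while d < -math.pi:
            d += 2 * math.pi
        swept += d
    swept_deg = math.degrees(abs(swept))
    return swept_deg, 360.0 - swept_deg

def suggested_rotation(angular_error_deg):
    """Rotation to display to the user to close the capture loop."""
    return max(0.0, angular_error_deg)
```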
- computing platform(s) 902, remote platform(s) 904, and/or external resources 944 may be operatively linked via one or more electronic communication links.
- electronic communication links may be established, at least in part, via a network such as the Internet and/or other networks. It will be appreciated that this is not intended to be limiting, and that the scope of this disclosure includes implementations in which computing platform(s) 902, remote platform(s) 904, and/or external resources 944 may be operatively linked via some other communication media.
- a given remote platform 904 may include one or more processors configured to execute computer program modules.
- the computer program modules may be configured to enable an expert or user associated with the given remote platform 904 to interface with system 900 and/or external resources 944, and/or provide other functionality attributed herein to remote platform(s) 904.
- a given remote platform 904 and/or a given computing platform 902 may include one or more of a server, a desktop computer, a laptop computer, a handheld computer, a tablet computing platform, a NetBook, a Smartphone, a gaming console, and/or other computing platforms.
- External resources 944 may include sources of information outside of system 900, external entities participating with system 900, and/or other resources. In some implementations, some or all of the functionality attributed herein to external resources 944 may be provided by resources included in system 900.
- Computing platform(s) 902 may include electronic storage 946, one or more processors 948, and/or other components. Computing platform(s) 902 may include communication lines, or ports to enable the exchange of information with a network and/or other computing platforms. Illustration of computing platform(s) 902 in FIG. 9 is not intended to be limiting. Computing platform(s) 902 may include a plurality of hardware, software, and/or firmware components operating together to provide the functionality attributed herein to computing platform(s) 902. For example, computing platform(s) 902 may be implemented by a cloud of computing platforms operating together as computing platform(s) 902.
- Electronic storage 946 may comprise non-transitory storage media that electronically stores information.
- the electronic storage media of electronic storage 946 may include one or both of system storage that is provided integrally (i.e., substantially non-removable) with computing platform(s) 902 and/or removable storage that is removably connectable to computing platform(s) 902 via, for example, a port (e.g., a USB port, a firewire port, etc.) or a drive (e.g., a disk drive, etc.).
- Electronic storage 946 may include one or more of optically readable storage media (e.g., optical disks, etc.), magnetically readable storage media (e.g., magnetic tape, magnetic hard drive, floppy drive, etc.), electrical charge-based storage media (e.g., EEPROM, RAM, etc.), solid-state storage media (e.g., flash drive, etc.), and/or other electronically readable storage media.
- Electronic storage 946 may include one or more virtual storage resources (e.g., cloud storage, a virtual private network, and/or other virtual storage resources).
- Electronic storage 946 may store software algorithms, information determined by processor(s) 948, information received from computing platform(s) 902, information received from remote platform(s) 904, and/or other information that enables computing platform(s) 902 to function as described herein.
- Processor(s) 948 may be configured to provide information processing capabilities in computing platform(s) 902.
- processor(s) 948 may include one or more of a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information.
- Although processor(s) 948 is shown in FIG. 9 as a single entity, this is for illustrative purposes only.
- processor(s) 948 may include a plurality of processing units. These processing units may be physically located within the same device, or processor(s) 948 may represent processing functionality of a plurality of devices operating in coordination.
- Processor(s) 948 may be configured to execute modules 908, 910, 912, 914, 916, 918, 920, 922, 924, 926, 928, 930, 932, 934, 936, 938, 940, and/or 942, and/or other modules.
- Processor(s) 948 may be configured to execute modules 908, 910, 912, 914, 916, 918, 920, 922, 924, 926, 928, 930, 932, 934, 936, 938, 940, and/or 942, and/or other modules by software; hardware; firmware; some combination of software, hardware, and/or firmware; and/or other mechanisms for configuring processing capabilities on processor(s) 948.
- The term "module" may refer to any component or set of components that performs the functionality attributed to the module. This may include one or more physical processors during execution of processor-readable instructions, the processor-readable instructions themselves, circuitry, hardware, storage media, or any other components.
- modules 908, 910, 912, 914, 916, 918, 920, 922, 924, 926, 928, 930, 932, 934, 936, 938, 940, and/or 942 are illustrated in FIG. 9 as being implemented within a single processing unit, in implementations in which processor(s) 948 includes multiple processing units, one or more of modules 908, 910, 912, 914, 916, 918, 920, 922, 924, 926, 928, 930, 932, 934, 936, 938, 940, and/or 942 may be implemented remotely from the other modules.
- The description of modules 908, 910, 912, 914, 916, 918, 920, 922, 924, 926, 928, 930, 932, 934, 936, 938, 940, and/or 942 below is for illustrative purposes and is not intended to be limiting, as any of these modules may provide more or less functionality than is described.
- One or more of modules 908, 910, 912, 914, 916, 918, 920, 922, 924, 926, 928, 930, 932, 934, 936, 938, 940, and/or 942 may be eliminated, and some or all of their functionality may be provided by other ones of these modules.
- FIG. 10 illustrates a method 1000 for augmenting 3D models, in accordance with one or more implementations.
- the operations of method 1000 presented below are intended to be illustrative. In some implementations, method 1000 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order in which the operations of method 1000 are illustrated in FIG. 10 and described below is not intended to be limiting.
- method 1000 may be implemented in one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information).
- the one or more processing devices may include one or more devices executing some or all of the operations of method 1000 in response to instructions stored electronically on an electronic storage medium.
- the one or more processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of method 1000.
- An operation 1002 may include receiving a first plurality of images. Operation 1002 may be performed by one or more hardware processors configured by machine-readable instructions including a module that is the same as or similar to image receiving module 908, in accordance with one or more implementations.
- An operation 1004 may include generating a first 3D model based on the first plurality of images. Operation 1004 may be performed by one or more hardware processors configured by machine-readable instructions including a module that is the same as or similar to model generating module 910, in accordance with one or more implementations.
- An operation 1006 may include receiving a second plurality of images. Operation 1006 may be performed by one or more hardware processors configured by machine-readable instructions including a module that is the same as or similar to image receiving module 908, in accordance with one or more implementations.
- An operation 1008 may include generating a second 3D model based on the second plurality of images. Operation 1008 may be performed by one or more hardware processors configured by machine-readable instructions including a module that is the same as or similar to model generating module 910, in accordance with one or more implementations.
- An operation 1010 may include augmenting the first 3D model with the second 3D model. Operation 1010 may be performed by one or more hardware processors configured by machine- readable instructions including a module that is the same as or similar to model augmentation module 912, in accordance with one or more implementations.
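Operations 1002 through 1010 can be summarized as a short pipeline: receive two pluralities of images, generate a 3D model from each, and augment the first model with the second. The sketch below parameterizes the model generating and model augmentation modules as callables; the function name and signature are illustrative assumptions.

```python
def augment_models(first_images, second_images, generate_model, augment):
    """Sketch of method 1000: receive two image sets (operations 1002, 1006),
    generate a 3D model from each (1004, 1008), then augment the first model
    with the second (1010). generate_model and augment stand in for the
    model generating and model augmentation modules."""
    first_model = generate_model(first_images)     # operation 1004
    second_model = generate_model(second_images)   # operation 1008
    return augment(first_model, second_model)      # operation 1010
```

With toy stand-ins (element sets for models, set union for augmentation), the control flow can be exercised end to end.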
- All of the processes described herein may be embodied in, and fully automated via, software code modules executed by a computing system that includes one or more computers or processors.
- the code modules may be stored in any type of non-transitory computer-readable medium or other computer storage device. Some or all the methods may be embodied in specialized computer hardware.
- a processor can be a microprocessor, but in the alternative, the processor can be a controller, microcontroller, or state machine, combinations of the same, or the like.
- a processor can include electrical circuitry configured to process computer-executable instructions.
- a processor includes an FPGA or other programmable device that performs logic operations without processing computer-executable instructions.
- a processor can also be implemented as a combination of computing devices, for example, a combination of a DSP and a microprocessor, one or more microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
- a processor may also include primarily analog components. For example, some or all of the signal processing algorithms described herein may be implemented in analog circuitry or mixed analog and digital circuitry.
- a computing environment can include any type of computer system, including, but not limited to, a computer system based on a microprocessor, a mainframe computer, a digital signal processor, a portable computing device, a device controller, or a computational engine within an appliance, to name a few.
- Disjunctive language such as the phrase "at least one of X, Y, or Z," unless specifically stated otherwise, is understood with the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (for example, X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present.
- Language such as "a device configured to" is intended to include one or more recited devices. Such one or more recited devices can also be collectively configured to carry out the stated recitations.
- a processor configured to carry out recitations A, B and C can include a first processor configured to carry out recitation A working in conjunction with a second processor configured to carry out recitations B and C.
Landscapes
- Engineering & Computer Science (AREA)
- Architecture (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Processing Or Creating Images (AREA)
- Image Processing (AREA)
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP22838433.5A EP4367640A1 (en) | 2021-07-08 | 2022-07-07 | Methods, storage media, and systems for augmenting data or models |
CA3221270A CA3221270A1 (en) | 2021-07-08 | 2022-07-07 | Methods, storage media, and systems for augmenting data or models |
AU2022306583A AU2022306583A1 (en) | 2021-07-08 | 2022-07-07 | Methods, storage media, and systems for augmenting data or models |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US202163219804P | 2021-07-08 | 2021-07-08 | |
US63/219,804 | 2021-07-08 | ||
US202263358716P | 2022-07-06 | 2022-07-06 | |
US63/358,716 | 2022-07-06 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023283377A1 true WO2023283377A1 (en) | 2023-01-12 |
Family
ID=84800970
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2022/036416 WO2023283377A1 (en) | 2021-07-08 | 2022-07-07 | Methods, storage media, and systems for augmenting data or models |
Country Status (4)
Country | Link |
---|---|
EP (1) | EP4367640A1 (en) |
AU (1) | AU2022306583A1 (en) |
CA (1) | CA3221270A1 (en) |
WO (1) | WO2023283377A1 (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110181589A1 (en) * | 2010-01-28 | 2011-07-28 | The Hong Kong University Of Science And Technology | Image-based procedural remodeling of buildings |
US20120314924A1 (en) * | 2011-03-29 | 2012-12-13 | Boston Scientific Neuromodulation Corporation | System and method for atlas registration |
US8818768B1 (en) * | 2010-10-12 | 2014-08-26 | Google Inc. | Modeling three-dimensional interiors from photographic images, and applications thereof |
US20160378887A1 (en) * | 2015-06-24 | 2016-12-29 | Juan Elias Maldonado | Augmented Reality for Architectural Interior Placement |
US10621779B1 (en) * | 2017-05-25 | 2020-04-14 | Fastvdo Llc | Artificial intelligence based generation and analysis of 3D models |
US20200372708A1 (en) * | 2008-11-05 | 2020-11-26 | Hover Inc. | Systems and methods for generating three dimensional geometry |
2022
- 2022-07-07 CA CA3221270A patent/CA3221270A1/en active Pending
- 2022-07-07 WO PCT/US2022/036416 patent/WO2023283377A1/en active Application Filing
- 2022-07-07 EP EP22838433.5A patent/EP4367640A1/en active Pending
- 2022-07-07 AU AU2022306583A patent/AU2022306583A1/en active Pending
Also Published As
Publication number | Publication date |
---|---|
EP4367640A1 (en) | 2024-05-15 |
AU2022306583A1 (en) | 2023-12-14 |
CA3221270A1 (en) | 2023-01-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6879891B2 (en) | Methods and systems for completing point clouds using plane segments | |
US11688138B2 (en) | Methods and systems for detecting and combining structural features in 3D reconstruction | |
US11449926B1 (en) | Image-based rendering of real spaces | |
US10896497B2 (en) | Inconsistency detecting system, mixed-reality system, program, and inconsistency detecting method | |
JP6907325B2 (en) | Extraction of 2D floor plan from 3D grid representation of interior space | |
Turner et al. | Fast, automated, scalable generation of textured 3D models of indoor environments | |
US11632602B2 (en) | Automated determination of image acquisition locations in building interiors using multiple data capture devices | |
JP6843237B2 (en) | A system and method for expressing the point cloud of the scene | |
US11380078B2 (en) | 3-D reconstruction using augmented reality frameworks | |
WO2013162735A1 (en) | 3d body modeling from one or more depth cameras in the presence of articulated motion | |
WO2022156755A1 (en) | Indoor positioning method and apparatus, device, and computer-readable storage medium | |
Kim et al. | Interactive acquisition of residential floor plans | |
US20230035601A1 (en) | Floorplan Generation System And Methods Of Use | |
US20140168204A1 (en) | Model based video projection | |
US20160372156A1 (en) | Image fetching for timeline scrubbing of digital media | |
US20230206393A1 (en) | Automated Building Information Determination Using Inter-Image Analysis Of Multiple Building Images | |
Placitelli et al. | Low-cost augmented reality systems via 3D point cloud sensors | |
WO2023283377A1 (en) | Methods, storage media, and systems for augmenting data or models | |
WO2024011063A1 (en) | Methods, storage media, and systems for combining disparate 3d models of a common building object | |
Cheng et al. | Texture mapping 3d planar models of indoor environments with noisy camera poses | |
Kim et al. | Planar Abstraction and Inverse Rendering of 3D Indoor Environments | |
US20230394746A1 (en) | Multi-Room 3D Floor Plan Generation | |
US20230394765A1 (en) | Generation of 3D Room Plans With 2D Shapes and 3D Primitives | |
CA3201746A1 (en) | 3-d reconstruction using augmented reality frameworks | |
NZ739813B2 (en) | Methods and systems for detecting and combining structural features in 3d reconstruction |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22838433 Country of ref document: EP Kind code of ref document: A1 |
WWE | Wipo information: entry into national phase |
Ref document number: 2022306583 Country of ref document: AU Ref document number: AU2022306583 Country of ref document: AU |
WWE | Wipo information: entry into national phase |
Ref document number: 3221270 Country of ref document: CA |
ENP | Entry into the national phase |
Ref document number: 2022306583 Country of ref document: AU Date of ref document: 20220707 Kind code of ref document: A |
WWE | Wipo information: entry into national phase |
Ref document number: 2022838433 Country of ref document: EP |
NENP | Non-entry into the national phase |
Ref country code: DE |
ENP | Entry into the national phase |
Ref document number: 2022838433 Country of ref document: EP Effective date: 20240208 |