US20230259667A1 - Generating measurements of physical structures and environments through automated analysis of sensor data - Google Patents

Generating measurements of physical structures and environments through automated analysis of sensor data

Info

Publication number
US20230259667A1
Authority
US
United States
Prior art keywords
interior space
panorama image
computing device
outputs
inspection platform
Prior art date
Legal status
Granted
Application number
US18/307,270
Other versions
US11960799B2
Inventor
Victor Palmer
Vu Tran
Brian Webb
Brian Keller
Current Assignee
Flyreel Inc
Original Assignee
Flyreel Inc
Priority date
Filing date
Publication date
Application filed by Flyreel Inc filed Critical Flyreel Inc
Priority to US18/307,270 (granted as US11960799B2)
Assigned to Flyreel, Inc. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: TRAN, VU; KELLER, BRIAN; PALMER, VICTOR; WEBB, BRIAN
Publication of US20230259667A1
Application granted
Publication of US11960799B2
Legal status: Active

Classifications

    • G06F 30/13: Architectural design, e.g. computer-aided architectural design [CAAD] related to design of buildings, bridges, landscapes, production plants or roads
    • G06T 3/4038: Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • G01C 11/10: Interpretation of pictures by comparison of two or more pictures of the same area, the pictures not being supported in the same relative position as when they were taken, using computers to control the position of the pictures
    • G01P 13/00: Indicating or recording presence, absence, or direction, of movement
    • G06F 30/27: Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • G06T 11/203: Drawing of straight lines or curves
    • G06T 7/0004: Industrial image inspection
    • G06T 7/62: Analysis of geometric attributes of area, perimeter, diameter or volume
    • H04N 23/63: Control of cameras or camera modules by using electronic viewfinders
    • H04N 23/698: Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • G01P 15/18: Measuring acceleration, deceleration, or shock in two or more dimensions
    • G06T 2210/04: Architectural design, interior design
    • G06T 2210/12: Bounding box

Definitions

  • the inspection platform establishes the layout of the interior space based on the series of outputs (step 705 ). For example, the inspection platform may calculate the dimensions of the interior space based on the series of outputs and then create a 2D floor plan for the interior space that is based on the dimensions. As another example, the inspection platform may calculate the dimensions of the interior space based on the series of outputs and then create a 3D floor plan for the interior space that is based on the dimensions. Whether the floor plan is 2D or 3D may depend on the type(s) of junctures that are predicted by the inspection platform. Moreover, the inspection platform can encode information regarding the layout in a data structure that is associated with the interior space (step 706 ). Step 706 of FIG. 7 may be substantially similar to step 506 of FIG. 5 .

Abstract

Introduced here are computer programs and associated computer-implemented techniques for generating measurements of physical structures and environments in an automated manner through analysis of data that is generated by one or more sensors included in a computing device. This can be accomplished by combining insights that are derived through analysis of different types of data that are generated, computed, or otherwise obtained by a computing device. For instance, a computer program may enable or facilitate measurement of arbitrary dimensions, angles, and square footage of a physical structure based on (i) images generated by an image sensor included in the corresponding computing device and (ii) measurements generated by an inertial sensor included in the corresponding computing device.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation application claiming priority to U.S. patent application Ser. No. 17/500,128, filed 13 Oct. 2021, and published as U.S. Patent Application Publication No. US20220114298 on 14 Apr. 2022, which is incorporated by reference herein in its entirety. U.S. patent application Ser. No. 17/500,128 claims priority to U.S. Provisional Application No. 63/091,149, titled “System and Method for Generating Automated Structural Measurements” and filed on Oct. 13, 2020, which is incorporated by reference herein in its entirety.
  • TECHNICAL FIELD
  • Various embodiments concern computer programs and associated computer-implemented techniques for generating measurements of physical structures and environments in an automated manner.
  • BACKGROUND
  • Conventionally, dimensions of physical structures and environments have been measured using implements such as tape measures, yard sticks, rulers, and the like. These implements are useful for measuring the straight-line distance between a first location and a second location in Euclidean space (e.g., along a plane). There are several notable downsides to using these implements, however. Measuring not only tends to be inconsistent due to the reliance on the person(s) using the implement, but can also be difficult when portions of the physical structure or environment being measured are occluded or occupied.
  • There have been several attempts to address these downsides through the development of computer programs that can be executed by mobile computing devices (or simply “computing devices”). One attempt involved the development of a computer program that prompted the user to orient the camera of a computing device toward an object to be measured and then requested that the user interact with a digital image generated by the camera so as to indicate the bounds of the object. Another approach involved the development of a computer program that prompted the user to orient the camera of a computing device toward an object to be measured and then requested that the user provide a reference measurement for another object contained in a digital image generated by the camera.
  • Computer programs such as these are much more convenient than conventional implements as they are readily downloadable by anyone with a computing device. However, there still tends to be a large amount of inconsistency that results in incorrect measurements. Much of this inconsistency is due to the degree to which users are still involved in the measurement process. For example, a user may imprecisely indicate the bounds of an object to be measured, or a user may input an incorrect reference measurement by mistake.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • This patent or application publication contains at least one drawing executed in color. Copies of this patent or application publication with color drawings will be provided by the Office upon request and payment of the necessary fee.
  • FIG. 1 illustrates a network environment that includes an inspection platform.
  • FIG. 2 illustrates an example of a computing device that is able to implement an inspection platform designed to establish the layout of a physical space associated with a structure.
  • FIG. 3 depicts an example of a communication environment that includes an inspection platform implemented on a computing device.
  • FIG. 4 includes a flow diagram of a process for automatically extracting measurements of a structure to enable measurement of arbitrary dimensions, angles, or square footage through analysis of image data and inertial data generated by a computing device.
  • FIG. 5 includes a flow diagram of a process for facilitating a guided procedure for modeling an interior space of a structure using an inspection platform that is executing on a computing device.
  • FIG. 6 illustrates how each juncture may be represented using a line that overlays a panorama image (or simply “panorama”).
  • FIG. 7 includes a flow diagram of another process for facilitating a guided procedure for modeling an interior space using an inspection platform that is executing on a computing device.
  • FIG. 8 includes a flow diagram of a process for identifying objects that are contained in a physical space for which a panorama is available.
  • FIG. 9 is a block diagram illustrating an example of a processing system in which at least some operations described herein can be implemented.
  • Various features of the technology described herein will become more apparent to those skilled in the art from a study of the Detailed Description in conjunction with the drawings. Various embodiments are depicted in the drawings for the purpose of illustration. However, those skilled in the art will recognize that alternative embodiments may be employed without departing from the principles of the technology. Accordingly, although specific embodiments are shown in the drawings, the technology is amenable to various modifications.
  • DETAILED DESCRIPTION
  • Introduced here are computer programs that are able to generate measurements of physical structures and environments in an automated manner through analysis of data that is generated by one or more sensors included in a computing device. As further discussed below, these computer programs may be able to accomplish this by combining insights that are derived through analysis of different types of data that are generated, computed, or otherwise obtained by a computing device. For instance, a computer program may enable or facilitate measurement of arbitrary dimensions, angles, and square footage of a physical structure (or simply “structure”) based on (i) images generated by an image sensor included in the corresponding computing device and (ii) measurements generated by an inertial sensor (also referred to as a “motion sensor”) included in the corresponding computing device. This approach is advantageous since the images provide a visual representation of the structure up to an unknown scale factor, while the inertial measurements (also referred to as “motion measurements”) provide an estimate of the unknown scale factor. Together, these data enable estimates of absolute measurements on the structure.
  • For the purpose of illustration, embodiments may be described in the context of generating measurements of a structure in an automated manner. As an example, a computer program may be designed to establish the height, width, and depth of the structure so as to establish its dimensions. However, those skilled in the art will recognize that features of those embodiments may be similarly applicable to generating measurements of a physical environment (or simply “environment”).
  • Accordingly, the computer programs described herein may be able to generate measurements of interior spaces of structures, exterior spaces of structures, or salient objects present therein. The term “interior space” may refer to a three-dimensional (3D) space that is enclosed by a floor, ceiling, and walls. The term “interior space” may be used interchangeably with the term “room.” Note that an interior space need not be completely bounded by walls on all sides, as the teachings of the present disclosure can be applied to interior spaces that are partially or fully enclosed by walls. Meanwhile, the term “exterior space” may be a space that is external to a structure of interest. Examples of exterior spaces include driveways, decks, and the like.
  • As an example, assume that a computer program executing in a computing device receives input that is representative of a request to measure a structure. Normally, this input will correspond to a user either initiating (i.e., opening) the computer program or interacting with the computer program in such a manner so as to indicate that she is interested in measuring the structure.
  • Thereafter, the computer program may instruct the user to situate the computing device in a predetermined orientation with respect to the structure. For example, the user may be instructed to situate the computing device so that image data generated by an image sensor contained therein is captured in a horizontal orientation relative to the structure. As part of a measurement operation, the computer program may cause the image sensor to generate image data that corresponds to at least a portion of the structure. Generally, the image data is representative of a series of digital images (or simply “images”) that are generated by the image sensor in succession. As part of the measurement operation, the computer program may also obtain inertial measurement data (also referred to as “IMU data” or “inertial data”) that is representative of measurements generated by one or more motion sensors contained in the computing device. The inertial data may be temporally aligned with the image data. Thus, the inertial data may be generated by the motion sensor(s) contemporaneously with the image data generated by the image sensor.
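  • As a purely illustrative sketch of the temporal alignment described above, each captured image can be paired with the inertial sample whose timestamp is closest to the image's timestamp. The data structures and function names below are assumptions made for illustration, not the actual interfaces of the computer program:

```python
from bisect import bisect_left
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ImuSample:
    timestamp: float                                  # seconds on the device clock
    angular_velocity: Tuple[float, float, float]      # rad/s from the gyroscope
    acceleration: Tuple[float, float, float]          # m/s^2 from the accelerometer

def align_imu_to_frames(frame_timestamps: List[float],
                        imu_samples: List[ImuSample]):
    """Pair each image frame with the temporally closest inertial sample.

    Assumes imu_samples are sorted by timestamp, as they would be when read
    off a sensor queue in order of arrival.
    """
    imu_times = [s.timestamp for s in imu_samples]
    aligned = []
    for t in frame_timestamps:
        i = bisect_left(imu_times, t)
        # Consider the sample just before and just after the frame timestamp.
        candidates = [c for c in (i - 1, i) if 0 <= c < len(imu_samples)]
        best = min(candidates, key=lambda c: abs(imu_times[c] - t))
        aligned.append((t, imu_samples[best]))
    return aligned
```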
  • The computer program can then generate an image of the structure based on the image data and inertial data. For example, the computer program may programmatically combine the image data and inertial data so as to create a panorama image (or simply “panorama”) having an equirectangular projection by estimating, based on the image data, the focal lengths needed to generate the equirectangular panorama and then determining, based on the inertial data, an approximate location of each image included in the image data relative to the geography of the structure.
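  • The geometric core of that combination step can be sketched as follows: given an estimated focal length (in pixels) and an orientation for each image derived from the inertial data, every source pixel is mapped onto an equirectangular canvas by converting its viewing ray into longitude and latitude. This is only one plausible formulation, offered for illustration; the disclosure does not prescribe these exact equations or parameter names:

```python
import numpy as np

def project_to_equirectangular(image, yaw, pitch, focal_px, pano_w=2048, pano_h=1024):
    """Paint one perspective image (H x W x 3 NumPy array) into an equirectangular canvas.

    yaw/pitch (radians) describe the camera orientation for this image, e.g.
    derived from the temporally aligned inertial measurements; focal_px is the
    focal length in pixels estimated during stitching.
    """
    h, w = image.shape[:2]
    pano = np.zeros((pano_h, pano_w, 3), dtype=image.dtype)
    # Ray direction for every source pixel under a pinhole camera model.
    xs, ys = np.meshgrid(np.arange(w) - w / 2, np.arange(h) - h / 2)
    dirs = np.stack([xs, ys, np.full_like(xs, float(focal_px))], axis=-1)
    dirs = dirs / np.linalg.norm(dirs, axis=-1, keepdims=True)
    # Rotate rays by the camera's pitch (about x) and then yaw (about y).
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    rot_x = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    rot_y = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    dirs = dirs @ (rot_y @ rot_x).T
    # Longitude/latitude of each ray map to panorama pixel coordinates.
    lon = np.arctan2(dirs[..., 0], dirs[..., 2])           # [-pi, pi]
    lat = np.arcsin(np.clip(dirs[..., 1], -1.0, 1.0))      # [-pi/2, pi/2]
    px = ((lon + np.pi) / (2 * np.pi) * (pano_w - 1)).astype(int)
    py = ((lat + np.pi / 2) / np.pi * (pano_h - 1)).astype(int)
    pano[py, px] = image
    return pano
```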
  • The computer program can then determine the locations of physical features, such as junctures, based on an analysis of the equirectangular panorama that includes at least a portion of the structure. The term “juncture” may refer to any location where a pair of walls join, intersect, or otherwise merge or converge with one another. Note that the term “juncture” is intended to cover corners where the walls form acute, obtuse, or reflex angles, so the teachings of the present disclosure are applicable to structures regardless of their particular configuration. Moreover, the computer program may identify the boundaries of objects that are present in the equirectangular panorama. For example, the computer program may estimate the dimensions of objects to a scale factor such that final measurements are regressed relative to the distance between the computing device and ground.
  • Over the course of the measurement operation, relevant information may be posted to an interface that is presented by the computing device for consideration by the user. As an example, a manipulable two-dimensional (2D) or 3D image may be posted to the interface and then updated in near real time as the computer program generates measurements for the structure and objects present therein. This manipulable image may also be useful in identifying which portions of the structure have not yet been imaged.
  • At a high level, the present disclosure covers two actions that can be performed in sequence as part of a measurement operation. First, a panorama of the structure is created. Second, information regarding the structure—like its dimensions—is determined or inferred through analysis of the panorama. The first action could be performed in a roughly similar manner regardless of the nature of the structure (e.g., whether an interior space or exterior space is being imaged). The second action may depend on the nature of the structure, however. If the structure is an interior space, for example, there may be several assumptions (e.g., fixed ceiling height, bounded by junctures, etc.) that may not be applicable to exterior spaces. Accordingly, how the panorama is analyzed may depend on the nature of the structure being imaged.
  • Embodiments may be described in the context of executable instructions for the purpose of illustration. However, those skilled in the art will recognize that aspects of the technology could be implemented via hardware, firmware, or software. As an example, a computer program that is representative of a software-implemented inspection platform (or simply “inspection platform”) designed to facilitate measuring of the interior spaces or exterior spaces of structures may be executed by the processor of a computing device. This computer program may interface, directly or indirectly, with hardware, firmware, or other software implemented on the computing device. For instance, this computer program may interact with an image sensor that is able to generate image data from which a panorama can be constructed, a motion sensor that generates measurements indicative of motion of the computing device, etc.
  • Terminology
  • References in the present disclosure to “an embodiment” or “some embodiments” mean that the feature, function, structure, or characteristic being described is included in at least one embodiment. Occurrences of such phrases do not necessarily refer to the same embodiment, nor are they necessarily referring to alternative embodiments that are mutually exclusive of one another.
  • The term “based on” is to be construed in an inclusive sense rather than an exclusive sense. That is, in the sense of “including but not limited to.” Thus, unless otherwise noted, the term “based on” is intended to mean “based at least in part on.”
  • The terms “connected,” “coupled,” and variants thereof are intended to include any connection or coupling between two or more elements, either direct or indirect. The connection or coupling can be physical, logical, or a combination thereof. For example, elements may be electrically or communicatively coupled to one another despite not sharing a physical connection.
  • The term “module” may refer broadly to software, firmware, hardware, or combinations thereof. Modules are typically functional components that generate one or more outputs based on one or more inputs. A computer program may include or utilize one or more modules. For example, a computer program may utilize multiple modules that are responsible for completing different tasks, or a computer program may utilize a single module that is responsible for completing all tasks.
  • When used in reference to a list of multiple items, the word “or” is intended to cover all of the following interpretations: any of the items in the list, all of the items in the list, and any combination of items in the list.
  • Overview of Inspection Platform
  • FIG. 1 illustrates a network environment 100 that includes an inspection platform 102. Individuals (also referred to as “users”) can interface with the inspection platform 102 via interfaces 104. For example, a user may be able to access an interface through which information regarding a structure can be input. For instance, the user may specify the name of an interior space whose dimensions are to be measured, or the user may provide information regarding the property (e.g., an address, number of occupants, construction materials, insurance provider) or its owner (e.g., name, insurance account number). As another example, a user may be able to access an interface through which feedback is provided as images (e.g., panoramas) of a structure are generated. These interfaces 104 may also permit users to view 2D and 3D representations of structures, as well as manage preferences. The term “user,” as used herein, may refer to a homeowner, business owner, assessor, insurance adjuster (also referred to as a “claims adjuster”), or another individual with an interest in generating measurements for a structure.
  • As shown in FIG. 1 , the inspection platform 102 may reside in a network environment 100. Thus, the computing device on which the inspection platform 102 is implemented may be connected to one or more networks 106 a-b. These networks 106 a-b may be personal area networks (PANs), local area networks (LANs), wide area networks (WANs), metropolitan area networks (MANs), cellular networks, or the Internet. Additionally or alternatively, the inspection platform 102 can be communicatively coupled to one or more computing devices over a short-range wireless connectivity technology, such as Bluetooth®, Near Field Communication (NFC), Wi-Fi® Direct (also referred to as “Wi-Fi P2P”), and the like.
  • The interfaces 104 may be accessible via a web browser, desktop application, mobile application, or over-the-top (OTT) application. For example, in order to complete the measurement operation further described below, a user may access an interface that is generated by a mobile application executing on a mobile phone. This interface may also be accessible via the web browser executing on the mobile phone. Accordingly, the interfaces 104 may be viewed on a mobile phone, a tablet computer, a wearable electronic device (e.g., a watch or fitness accessory), or a virtual or augmented reality system (e.g., a head-mounted display).
  • In some embodiments, at least some components of the inspection platform 102 are hosted locally. That is, part of the inspection platform 102 may reside on the computing device that is used to access the interfaces 104. For example, the inspection platform 102 may be embodied as a mobile application that is executable by a mobile phone. Note, however, that the mobile application may be communicatively connected to a server system 108 on which other components of the inspection platform 102 are hosted.
  • In other embodiments, the inspection platform 102 is executed entirely by a cloud computing service operated by, for example, Amazon Web Services®, Google Cloud Platform™, or Microsoft Azure®. In such embodiments, the inspection platform 102 may reside on a server system 108 that is comprised of one or more computer servers. These computer server(s) can include different types of data (e.g., spatial coordinates for junctures, dimensions, images), algorithms for processing the data, structure information (e.g., address, construction date, construction material, insurance provider), and other assets. Those skilled in the art will recognize that this information could also be distributed amongst the server system 108 and one or more computing devices. For example, some data that is generated by the computing device on which the inspection platform 102 resides may be stored on, and processed by, that computing device for security or privacy purposes.
  • FIG. 2 illustrates an example of a computing device 200 that is able to implement an inspection platform 214 designed to establish the layout of a physical space associated with a structure. This physical space could be an interior space or exterior space. The inspection platform 214 can establish the layout based on an analysis of image(s) of the physical space. As further discussed below, these image(s) can be acquired during a guided measurement operation in which a user is prompted to pan the computing device 200 to capture a panorama of the physical space. The term “panorama” may be used to refer to an image that represents a wide view of the physical space. Normally, panoramas offer an unbroken view of at least a 180 degree field of view (FOV) along the horizontal plane, though panoramas could offer an unbroken view of a 360 degree FOV along the horizontal plane. Through analysis of a panorama, the inspection platform 214 may be able to determine locations of junctures that correspond to the periphery of the physical space and then infer its dimensions (and thus, its layout) based on those locations.
  • The computing device 200 can include a processor 202, memory 204, display 206, communication module 208, image sensor 210, and sensor suite 212. Each of these components is discussed in greater detail below. Those skilled in the art will recognize that different combinations of these components may be present depending on the nature of the computing device 200.
  • The processor 202 can have generic characteristics similar to general-purpose processors, or the processor 202 may be an application-specific integrated circuit (ASIC) that provides control functions to the computing device 200. As shown in FIG. 2 , the processor 202 can be coupled to all components of the computing device 200, either directly or indirectly, for communication purposes.
  • The memory 204 may be comprised of any suitable type of storage medium, such as static random-access memory (SRAM), dynamic random-access memory (DRAM), electrically erasable programmable read-only memory (EEPROM), flash memory, or registers. In addition to storing instructions that can be executed by the processor 202, the memory 204 can also store data generated by the processor 202 (e.g., when executing the modules of the inspection platform 214). Note that the memory 204 is merely an abstract representation of a storage environment. The memory 204 could be comprised of actual memory chips or modules.
  • The display 206 can be any mechanism that is operable to visually convey information to a user. For example, the display 206 may be a panel that includes light-emitting diodes (LEDs), organic LEDs, liquid crystal elements, or electrophoretic elements. In some embodiments, the display 206 is touch sensitive. Thus, a user may be able to provide input to the inspection platform 214 by interacting with the display 206.
  • The communication module 208 may be responsible for managing communications between the components of the computing device 200, or the communication module 208 may be responsible for managing communications with other computing devices (e.g., server system 108 of FIG. 1 ). The communication module 208 may be wireless communication circuitry that is designed to establish communication channels with other computing devices. Examples of wireless communication circuitry include integrated circuits (also referred to as “chips”) configured for Bluetooth, Wi-Fi, NFC, and the like.
  • The image sensor 210 may be any electronic sensor that is able to detect and convey information in order to generate image data. Examples of image sensors include charge-coupled device (CCD) sensors and complementary metal-oxide semiconductor (CMOS) sensors. The image sensor 210 may be implemented in a camera module (or simply “camera”) that is implemented in the computing device 200. In some embodiments, the image sensor 210 is one of multiple image sensors implemented in the computing device 200. For example, the image sensor 210 could be included in a front- or rear-facing camera on a mobile phone.
  • Other sensors may also be implemented in the computing device 200. Collectively, these sensors may be referred to as the “sensor suite” 212 of the computing device 200. For example, the computing device 200 may include a motion sensor whose output is indicative of motion of the computing device 200 as a whole. Examples of motion sensors include multi-axis accelerometers and gyroscopes. In some embodiments, the motion sensor is implemented in an inertial measurement unit (IMU) that measures the force, angular rate, or orientation of the computing device 200. The IMU may accomplish this through the use of one or more accelerometers, one or more gyroscopes, one or more magnetometers, or any combination thereof. As another example, the computing device 200 may include a proximity sensor whose output is indicative of proximity of the computing device 200 to a nearest obstruction within the field of view of the proximity sensor. A proximity sensor may include, for example, an emitter that is able to emit infrared (IR) light and a detector that is able to detect reflected IR light that is returned toward the proximity sensor. These types of proximity sensors are sometimes called laser imaging, detection, and ranging (LiDAR) scanners. As another example, the computing device 200 may include an ambient light sensor whose output is indicative of the amount of light in the ambient environment.
  • For convenience, the inspection platform 214 is referred to as a computer program that resides within the memory 204. However, the inspection platform 214 could be comprised of software, firmware, or hardware that is implemented in, or accessible to, the computing device 200. In accordance with embodiments described herein, the inspection platform 214 may include a processing module 216, coordinating module 218, measuring module 220, and graphical user interface (GUI) module 222. Each of these modules can be an integral part of the inspection platform 214. Alternatively, these modules can be logically separate from the inspection platform 214 but operate “alongside” it. Together, these modules enable the inspection platform 214 to generate measurements of a physical space, as well as objects contained therein, in an automated manner by guiding a user through a measurement operation.
  • The processing module 216 can process data obtained by the inspection platform 214 into a format that is suitable for the other modules. For example, the processing module 216 may apply operations to images generated by the image sensor 210 in preparation for analysis by the other modules of the inspection platform 214. Thus, the processing module 216 may despeckle, denoise, or otherwise filter images that are generated by the image sensor 210. Additionally or alternatively, the processing module 216 may adjust properties like contrast, saturation, and gain in order to improve the outputs produced by the other modules of the inspection platform 214.
  • The processing module 216 may also process data obtained from the sensor suite 212 in preparation for analysis by the other modules of the inspection platform 214. As further discussed below, the inspection platform 214 may utilize data that is generated by a motion sensor in order to better understand data that is generated by the image sensor 210. For example, the inspection platform 214 may programmatically combine images generated by the image sensor 210 based on measurements generated by the motion sensor, so as to create a panorama of the physical space. Moreover, the inspection platform 214 may determine, based on the measurements, an approximate location of each image generated by the image sensor 210 and then use those insights to estimate dimensions of the physical space and objects contained therein. To accomplish this, the measurements generated by the motion sensor must be temporally aligned with the images generated by the image sensor 210. The processing module 216 may be responsible for ensuring that these data are temporally aligned with one another, such that the inspection platform 214 can readily identify the measurement(s) that correspond to each image.
  • The coordinating module 218 may be responsible for determining and/or cataloguing the locations of junctures. Assume, for example, that a user is interested in establishing the dimensions of a physical space. The periphery of the physical space may be defined by junctures, each of which represents a location at which a pair of surfaces are joined. In order to “map” the periphery of the physical space, the inspection platform 214 may request that the user locate the computing device 200 in a certain position (e.g., proximate the center of the physical space) and then capture a panorama of the physical space by panning the computing device 200. The coordinating module 218 may be responsible for determining, based on an analysis of the panorama, where the junctures of the physical space are located. As further discussed below, this can be accomplished by applying a trained model to the panorama. The trained model may produce, as output, coordinates indicating where a juncture is believed to be located based on pixel-level examination of the panorama. Normally, the trained model will produce a series of outputs that are representative of different junctures of the physical space. Using the series of outputs, the coordinating module 218 can “reconstruct” the physical space, thereby establishing its dimensions. In some embodiments, outputs produced by the sensor suite 212 are used to facilitate the process. For example, a measurement produced by a motion sensor or proximity sensor may allow the coordinating module 218 to gain greater insight into where the computing device 200 is located in the physical space (and thus how to establish the dimensions).
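  • A simplified version of that reconstruction can be sketched as follows. Suppose the trained model's outputs are reduced to one (column, row) pixel position per wall-wall juncture, where the row marks the floor-wall boundary directly beneath the corner, and suppose the height of the computing device 200 above the floor is known or assumed. Each corner can then be placed on the floor plane, which is enough to recover the room's footprint relative to that single scale reference. All names and the default camera height are illustrative assumptions:

```python
import math

def corners_to_floor_plan(corner_pixels, pano_w, pano_h, camera_height_m=1.5):
    """Place wall-wall corners on the floor plane from panorama pixel positions.

    corner_pixels: list of (column, row) pairs, where each row is the
    floor-wall boundary pixel directly below a predicted wall-wall juncture.
    camera_height_m: assumed height of the device above the floor, which acts
    as the scale reference (illustrative value).
    Returns a list of (x, y) floor coordinates in meters, camera at the origin.
    """
    corners = []
    for col, row in corner_pixels:
        yaw = (col / pano_w) * 2 * math.pi - math.pi        # horizontal viewing angle
        pitch = (row / pano_h) * math.pi - math.pi / 2      # positive looks downward
        if pitch <= 0:
            continue  # the boundary must lie below the horizon to intersect the floor
        dist = camera_height_m / math.tan(pitch)            # horizontal range to corner
        corners.append((dist * math.cos(yaw), dist * math.sin(yaw)))
    return corners
```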
  • The measuring module 220 can examine the locations of junctures determined by the coordinating module 218 in order to derive insights into the physical space. For example, the measuring module 220 may calculate a dimension of the physical space based on a comparison of multiple locations (e.g., a width defined by a pair of wall-wall boundaries, or a height defined by the floor-wall and ceiling-wall boundaries). As another example, the measuring module 220 may generate a 2D or 3D layout using the locations. Thus, the measuring module 220 may be able to construct a 2D or 3D model of the physical space based on insights gained through analysis of a single panorama. In some embodiments, the measuring module 220 is also responsible for cataloging the locations of junctures determined by the coordinating module 218. Thus, the measuring module 220 may store the locations in a data structure that is associated with either the physical space or a building with which the physical space is associated. Information derived by the measuring module 220, such as dimensions and layouts, can also be stored in the data structure. In some embodiments each location is represented using a coordinate system (e.g., a geographic coordinate system such as the Global Positioning System) that is associated with real-world positions, while in other embodiments each location is represented using a coordinate system that is associated with the surrounding environment. For example, the location of each juncture may be defined with respect to the location of the computing device 200.
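  • Continuing that sketch, individual dimensions follow directly from the recovered juncture positions: the width of a wall is the distance between its two floor-plan corners, and the ceiling height can be recovered from the elevation angle of the ceiling-wall boundary above a point whose horizontal distance from the camera is already known. The functions below are illustrative rather than a description of the measuring module's actual implementation:

```python
import math

def wall_width(corner_a, corner_b):
    """Width (meters) of the wall spanning two adjacent floor-plan corners."""
    return math.dist(corner_a, corner_b)

def ceiling_height(camera_height_m, horizontal_dist_m, ceiling_row, pano_h):
    """Ceiling height from the elevation angle of the ceiling-wall boundary.

    ceiling_row: panorama row of the ceiling-wall boundary above a point on the
    wall whose horizontal distance from the camera is horizontal_dist_m.
    """
    elevation = math.pi / 2 - (ceiling_row / pano_h) * math.pi  # positive above horizon
    return camera_height_m + horizontal_dist_m * math.tan(elevation)
```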
  • The GUI module 222 may be responsible for generating interfaces that can be presented on the display 206. Various types of information can be presented on these interfaces. For example, information that is calculated, derived, or otherwise obtained by the coordinating module 218 and/or measuring module 220 may be presented on an interface for display to the user. As another example, visual feedback may be presented on an interface so as to indicate to the user whether the measurement procedure is being completed properly.
  • Other modules could also be included in the inspection platform 214. As further discussed below, the inspection platform 214 may not only be responsible for determining the dimensions of a physical space, but also identifying objects contained therein in some embodiments. In such embodiments, the inspection platform 214 may include a computer vision module that applies one or more types of computer vision models to a panorama provided as input, so as to identify one or more types of objects contained in the corresponding physical space. The computer vision models may include multi-class classification, object detection, semantic segmentation, or other deep learning based computer vision approaches to identify multiple types of objects (e.g., different brands of a single object, or different types of objects that are commonly found in the same physical space). The computer vision module may also combine several individual computer vision models together to improve accuracy of detection (e.g., apply an object detection model to identify and localize a specific object and then apply a classification model on the localized object to determine kind or quality). These computer vision models may be stored in the memory 204 of the computing device 200, or these computer vision models may be stored in a remote memory that is accessible to the computing device 200 (e.g., via the communication module 208).
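  • The composition of models described above (detect an object, then classify the localized crop) can be sketched as follows, with the detector and classifier treated as opaque callables; their signatures are assumptions made for illustration rather than the platform's actual model interfaces:

```python
def detect_then_classify(panorama, detector, classifier, score_threshold=0.5):
    """Apply an object detector, then refine each detection with a classifier.

    detector(panorama) is assumed to return (label, score, (x0, y0, x1, y1))
    tuples, and classifier(crop) is assumed to return a refined label (e.g.,
    brand or condition). Both are placeholders for whatever trained models
    the inspection platform loads; panorama is assumed to be a NumPy image.
    """
    results = []
    for label, score, (x0, y0, x1, y1) in detector(panorama):
        if score < score_threshold:
            continue
        crop = panorama[y0:y1, x0:x1]          # localized object
        refined = classifier(crop)             # e.g., kind or quality of the object
        results.append({"object": label, "detail": refined, "box": (x0, y0, x1, y1)})
    return results
```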
  • FIG. 3 depicts an example of a communication environment 300 that includes an inspection platform 302 implemented on a computing device. As shown in FIG. 3 , the inspection platform 302 may receive and then process several types of data. Here, for example, the inspection platform 302 receives image data 306 and sensor data 308.
  • These data are generally obtained from different sources. For example, the image data 306 may include images that are generated by an image sensor (e.g., image sensor 210 of FIG. 2 ) that is implemented in the computing device 304. These images may be discrete images that are captured in rapid succession by the image sensor, or these images may be the individual frames of a video feed that is produced by the image sensor. The sensor data 308 may be obtained from one or more of the sensors included in the sensor suite (e.g., sensor suite 212 of FIG. 2 ) of the computing device 304. For example, the sensor data 308 may include measurements that are generated by a motion sensor, such as an accelerometer or gyroscope, as the computing device 304 is used to generate the image data 306.
  • Methodologies for Automated Measurement Estimation
  • FIG. 4 includes a flow diagram of a process 400 for automatically extracting measurements of a structure to enable measurement of arbitrary dimensions, angles, or square footage through analysis of image data and inertial data generated by a computing device. As an example, the process 400 could be used to establish the height, width, and depth of an interior space of a building, an exterior space of the building, or the building itself. Those skilled in the art will recognize that the process 400 may be suitable for use in the insurance industry for assisting in tasks related to insurance claim processes and/or underwriting processes. However, the process 400 is not limited to the insurance industry, and thus could be utilized in any field or scenario where providing automated measurements of a structure is beneficial.
  • Initially, an inspection platform will receive input that represents a request to measure at least one dimension of a structure (step 401). Generally, the input is representative of a request to measure a portion of the structure, such as an interior space or exterior space. The input normally corresponds to a user either initiating (i.e., opening) the inspection platform or interacting with the inspection platform in such a manner so as to indicate that she is interested in measuring the structure. For example, the user may interact with a digital element labeled “Initiate Measuring” or “Begin Measuring” that is viewable on an interface generated by the inspection platform. Alternatively, this input could correspond to an instruction that is provided by either a server system to which the computing device is connected or the computing device itself. For example, the server system may transmit an instruction to initiate the measurement operation to the inspection platform responsive to a determination that certain conditions have been met, the user has indicated a willingness to complete the measurement operation, etc.
  • The inspection platform can then obtain image data that is generated by an image sensor included in the computing device (step 402). The image data may include images of at least a portion of the structure to be measured. These images may be representative of static images captured in rapid succession by the image sensor, or these images may be representative of the frames of a video captured by the image sensor. Preferably, the inspection platform may guide the user through capturing the image data at various points of rotation of the computing device to ensure collection of a full 360-degree lateral (i.e., left and right relative to the structure) space and 180-degree vertical (i.e., up and down relative to the structure) space.
  • Moreover, the inspection platform can obtain inertial data that is generated by an inertial sensor included in the computing device (step 403). As mentioned above, the term “inertial sensor” may be used interchangeably with the term “motion sensor.” The nature of the inertial data will depend on the nature of the inertial sensor. For example, in embodiments where the inertial sensor is an IMU that includes one or more accelerometers, one or more gyroscopes, one or more magnetometers, or combinations thereof, the inertial data may include a discrete series of measurements in temporal order. Each measurement may indicate a characteristic of motion of the computing device. For example, the measurements may indicate rotational velocity and/or acceleration of the computing device over an interval of time corresponding to generation of the image data.
  • It is to be appreciated that the generation of image data and inertial data may be performed simultaneously by the computing device. As further discussed below, the temporal relationship between the image data and inertial data may allow the inspection platform to gain insights into the position of the computing device with respect to the structure.
  • The inspection platform can then combine, based on the inertial data, the different images included in the image data to create a panorama of at least a portion of the structure (step 404). For the purpose of illustration, the processes herein are described in the context of panoramas; however, those skilled in the art will recognize that the processes may be similarly applicable to images that are not wide-angle representations of a structure. As an example, the inspection platform may employ algorithms that are designed to combine or “stitch” the different images together so as to create an equirectangular projection of the structure. In some embodiments, the algorithms overcome the limitations of existing approaches by using line matching techniques in contrast to the feature matching techniques that are normally practiced by those existing approaches. At a high level, line matching techniques may allow the algorithms to optimally perform relative to the structures that are typically of interest (e.g., the interior spaces of buildings). In operation, the algorithms may estimate the focal length in a manner that is unique with respect to existing approaches. For example, through line matching, the algorithms may estimate the focal length of the image sensor associated with generation of the panorama and then further utilize the measurements included in the inertial data to determine an approximate location of each captured image relative to geography associated with the structure.
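  • One simple way to approximate the per-image placement described above is to integrate the gyroscope's angular velocity about the vertical axis between consecutive frame timestamps, which yields a relative yaw for each image. The line-matching refinement and drift correction discussed in the disclosure are omitted from this sketch, and the axis convention is an assumption:

```python
def yaw_per_frame(aligned):
    """Approximate a yaw angle (radians) for each frame by integrating gyro readings.

    aligned: list of (frame_timestamp, ImuSample) pairs, e.g. as produced by the
    align_imu_to_frames() sketch above. angular_velocity[1] is assumed to be the
    rotation rate about the device's vertical axis.
    """
    yaws = [0.0]
    for (t0, sample0), (t1, _unused) in zip(aligned, aligned[1:]):
        dt = t1 - t0
        yaws.append(yaws[-1] + sample0.angular_velocity[1] * dt)
    return yaws
```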
  • Thereafter, the inspection platform can determine the location of junctures of the structure (step 405), preferably utilizing the panorama of at least a portion of the structure. For example, the inspection platform may utilize the panorama to identify the locations of junctures that are representative of floor-wall, ceiling-wall, or wall-wall boundaries. Moreover, the inspection platform may utilize the panorama to compute, infer, or otherwise establish characteristics of other physical features of the structure. For example, the inspection platform may be able to determine the curvature of the surfaces—namely, the walls, ceilings, or floors—that make up the structure. Thus, in accordance with various embodiments, the algorithms executed by the inspection platform may preferably operate on the panorama to determine the locations of structural intersections like junctures. After determining the location of the junctures of the structure, the inspection platform can determine structural aspects (also referred to as “physical aspects”) of the structure (step 406). For example, the inspection platform may determine the location of all walls, ceilings, and floors in a 3D spatial representation by combining its knowledge of juncture locations and surface curvature.
  • In some embodiments, the inspection platform is further configured to estimate the boundaries of some or all of the objects that are present in the panorama (step 407). When identifying the boundaries of objects contained in the panorama, the inspection platform may identify salient objects (e.g., fixtures, appliances, furniture, animals) that are present in the panorama. The boundaries of a given object could be identified, for example, by estimating the 3D dimensions of aspects present in the imaged portion of the structure and then using those aspects for reference. Preferably, the estimated dimensions (and measurements of aspects of the structure) are estimated to a scale factor for consistency purposes. The final scaled values may be regressed relative to the distance between the computing device and ground. Additionally or alternatively, the final scaled values may be regressed relative to the sizes of the salient objects identified in the panorama (and thus present in the structure). As part of the measurement operation, the inspection platform may also be configured to geographically label (a process referred to as “geo-tagging”) one or more aspects of the structure or the salient objects contained therein.
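  • The scaling relationship described in this step can be illustrated with a single multiplicative factor: once the distance between the computing device and the ground is known or assumed, every quantity the analysis produced in arbitrary units can be converted to metric units by the same ratio. The function and default value below are assumptions for illustration:

```python
def apply_scale(relative_layout, relative_camera_height, camera_height_m=1.5):
    """Convert corner coordinates expressed up to an unknown scale factor into meters.

    relative_layout: (x, y) corner coordinates in the model's arbitrary units.
    relative_camera_height: the device-to-floor distance in those same units.
    camera_height_m: assumed or measured height of the device above the ground;
    this single reference anchors all final measurements (illustrative default).
    """
    scale = camera_height_m / relative_camera_height
    return [(x * scale, y * scale) for x, y in relative_layout]
```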
  • The inspection platform can then produce an output that is representative of the insights gained through analysis of the panorama (step 408). As an example, the inspection platform may generate a manipulable 3D image that is presented by the computing device for display to the user. The manipulable 3D image may depict user- or platform-selected dimensions estimated for the structure or the salient objects contained therein. Generally, the manipulable 3D image is generated by combining the determined physical aspects of the structure with the panorama, so as to enable the user to manipulate the 3D image relative to a selected viewing angle. Measurements determined by the inspection platform (e.g., through analysis of the panorama) can be overlaid on the manipulable 3D image, utilizing the estimated object boundaries that are present in the panorama image. To permit manipulation, the interface on which the 3D image is presented may be a “point-and-click interface” that is responsive to selections made by the user. For example, the user may be permitted to identify additional measurements to be generated by selecting one or more locations within the bounds of the 3D image. These location(s) may be associated with physical aspects of the structure, or these location(s) may be associated with a salient object contained in the structure.
  • In some embodiments, the inspection platform generates and then provides an overlay on the manipulable 3D image that is meant to controllably guide the user through the image data capture process. Additionally or alternatively, visual indicia may be provided on the interface to indicate determined structural damage relating to physical aspects of the structure. Assume, for example, that in the process of determining the physical aspects of the structure as discussed above with reference to step 406, the inspection platform discovers that a given physical aspect is abnormal in appearance. This can be accomplished by determining (e.g., based on an output produced by a computer vision model) that the given physical aspect is dissimilar from other physical aspects of the same type. This abnormality may be indicative of damage to the given physical aspect. In such a situation, a visual indicium (e.g., a bounding box or digital element in a certain color) may be provided on the interface to identify the abnormality. This can be done by the inspection platform to visually identify the risks, hazards, features, or material types that are associated with internal and external spaces. Examples of risks and hazards for interior spaces include exposed wires, presence of animals, issues related to water heaters, presence or absence of non-combustible materials, wood-burning stoves, issues related to plumbing, mold, water damage (e.g., stains), issues related to washing machines, presence or absence of vacant rooms, issues related to electrical components, and the like. Examples of risks and hazards for exterior spaces include presence of a deck, steps, stairs, porch, or railings, presence of a pool or pool-related features and components, presence of yard-related features and components, presence of propane tanks, presence of natural features (e.g., lakes, ponds, or streams) in proximity to a structure, condition of exterior portion of a structure, signage associated with a structure, presence of cages or enclosures for animals, presence of boarded ingress points (e.g., doors and windows), presence of clutter or debris, type of structure (e.g., whether the structure is a mobile or modular home), presence of tarps (e.g., on the roof), presence of bars (e.g., on the doors or windows), and the like.
  • Note that the types of objects contained in a structure will normally depend on the nature of the structure. If, for example, the imaged portion of the structure is an interior space, then the panorama may be of a bathroom with corresponding fixtures, a kitchen with corresponding fixtures, a laundry room with corresponding fixtures, a common area with associated fixtures, a utility closet with associated fixtures, etc. Meanwhile, if the imaged portion of the structure is an exterior space, then the panorama may be of the construction materials used for exterior construction, outbuildings (e.g., garages and playscapes), yard components (e.g., water features and garden features), exterior components (e.g., faucets, gutters, and condensate drain lines), etc.
  • FIG. 5 includes a flow diagram of a process 500 for facilitating a guided procedure for modeling an interior space of a structure using an inspection platform that is executing on a computing device. While the process 500 is described in the context of an interior space, those skilled in the art will recognize that aspects of the process 500 may be similarly applicable to modeling an exterior space. Initially, the inspection platform will receive input that is indicative of a request to establish the layout of an interior space that includes a series of junctures, each of which represents a point at which a different pair of surfaces are joined (step 501). Step 501 of FIG. 5 may be substantially similar to step 401 of FIG. 4 .
  • In some embodiments, a user may be able to indicate whether she is interested in establishing a 2D or 3D layout of the interior space. If the user is interested in establishing a 2D layout of the interior space, the inspection platform may only catalogue, store, or otherwise record information regarding wall-wall boundaries. Conversely, if the user is interested in establishing a 3D layout of the interior space, the inspection platform may catalogue, store, or otherwise record information regarding floor-wall, ceiling-wall, and wall-wall boundaries.
  • The inspection platform can then instruct the user to generate a panorama by panning the computing device while an image sensor generates one or more images of the interior space (step 502). To generate a panorama, the computing device will normally generate at least two images of the interior space and then combine or “stitch” those images together to create the panorama. Accordingly, the panorama may be representative of multiple images with overlapping portions that are joined together—usually by the operating system of the computing device—to collectively represent the interior space.
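  • For illustration only, the following sketch shows one way such a stitching step could be performed using OpenCV's high-level stitcher. In the embodiments described herein the stitching is normally handled by the operating system of the computing device, so this is a stand-in rather than the inspection platform's actual implementation, and the file names are hypothetical.
```python
# Illustrative sketch only: combining overlapping frames into a panorama.
# In the embodiments described above this step is normally performed by the
# operating system; OpenCV's Stitcher is used here purely as a stand-in.
import cv2

def stitch_frames(frame_paths):
    """Combine overlapping frames into a single panorama image."""
    frames = [cv2.imread(p) for p in frame_paths]
    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    status, panorama = stitcher.stitch(frames)
    if status != 0:  # 0 indicates successful stitching
        raise RuntimeError(f"Stitching failed with status code {status}")
    return panorama

# Hypothetical usage:
# panorama = stitch_frames(["frame_0.jpg", "frame_1.jpg", "frame_2.jpg"])
# cv2.imwrite("panorama.jpg", panorama)
```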
  • Then, the inspection platform can apply a trained model to the panorama to produce an output that is representative of a juncture predicted by the trained model based on an analysis of the panorama (step 503). Further information regarding models that are trained to identify junctures can be found in U.S. application Ser. No. 17/401,912, titled “Semi-Supervised 3D Indoor Layout Estimation from a Single 360 Degree Panorama,” which is incorporated by reference herein in its entirety. The trained model may output a matrix in which each entry indicates whether the corresponding pixel corresponds to a juncture. If an entry indicates that the corresponding pixel does correspond to a juncture, the value may also indicate the type of juncture (e.g., whether the juncture is representative of a floor-wall, ceiling-wall, or wall-wall boundary). With these values, the inspection platform may be able to define the juncture in the context of the panorama, for example, by identifying which pixels correspond to the juncture.
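  • As a minimal sketch of how such an output matrix might be consumed, assume the trained model returns an (H, W) integer label map in which 0 denotes no juncture and 1, 2, and 3 denote floor-wall, ceiling-wall, and wall-wall boundaries, respectively. This encoding and the function name are assumptions for illustration; the actual output format of the trained model may differ.
```python
import numpy as np

# Assumed label encoding for the per-pixel output matrix described above.
NO_JUNCTURE, FLOOR_WALL, CEILING_WALL, WALL_WALL = 0, 1, 2, 3

def extract_junctures(label_map: np.ndarray) -> dict:
    """Group pixels of a per-pixel juncture map by predicted juncture type.

    `label_map` is an (H, W) integer array in which each entry indicates
    whether the corresponding panorama pixel lies on a juncture and, if so,
    which type of juncture it represents.
    """
    junctures = {}
    for name, label in (("floor_wall", FLOOR_WALL),
                        ("ceiling_wall", CEILING_WALL),
                        ("wall_wall", WALL_WALL)):
        rows, cols = np.nonzero(label_map == label)
        # Each juncture is defined in the context of the panorama by the
        # set of (row, column) pixel coordinates assigned to it.
        junctures[name] = list(zip(rows.tolist(), cols.tolist()))
    return junctures
```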
  • Moreover, the inspection platform may cause display of at least a portion of the panorama with a graphical element overlaid thereon to indicate a location of the juncture that is predicted by the trained model (step 504). Normally, the output is one of a series of outputs produced by the trained model, each of which is representative of a separate juncture that is predicted by the trained model based on an analysis of the panorama. Accordingly, while each juncture may be represented using a line that overlays the panorama, the panorama may have one or more bounding boxes overlaid thereon as shown in FIG. 6 . Each bounding box may have four sides, namely, a first side that is representative of the floor-wall boundary, a second side that is representative of the ceiling-wall boundary, and third and fourth sides that are representative of wall-wall boundaries. In a spatial sense, the first and second sides are normally roughly parallel to one another and roughly orthogonal to the third and fourth sides. However, the bounding box may not be rectangular due to the distortion of the panorama. As can be seen in FIG. 6 , the bounding boxes that define the walls of an interior space tend to “bulge” along at least one side.
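  • A bounding box of this sort can be rendered by drawing its four boundaries as polylines over the panorama. The sketch below, which uses the Pillow imaging library and assumes each boundary is available as a list of (x, y) pixel coordinates, is illustrative only; the key names and drawing parameters are assumptions.
```python
from PIL import Image, ImageDraw

def overlay_wall_boundaries(panorama_path, boundaries, color=(255, 64, 64), width=4):
    """Draw the four predicted boundaries of a wall on top of the panorama.

    `boundaries` is assumed to be a dict with keys "floor_wall",
    "ceiling_wall", "left_wall_wall", and "right_wall_wall", each mapping to
    a list of (x, y) pixel coordinates along that boundary. Because the
    panorama is distorted, the floor-wall and ceiling-wall boundaries are
    drawn as polylines rather than straight edges, which produces the
    "bulging" boxes described above.
    """
    image = Image.open(panorama_path).convert("RGB")
    draw = ImageDraw.Draw(image)
    for key in ("floor_wall", "ceiling_wall", "left_wall_wall", "right_wall_wall"):
        points = boundaries.get(key, [])
        if len(points) >= 2:
            draw.line(points, fill=color, width=width)
    return image
```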
  • Graphical elements may be overlaid on the panorama after the panorama has been fully captured, or graphical elements may be overlaid on the panorama as the panorama is being captured. For example, while the user pans the computing device so that the periphery of the interior space can be viewed by the image sensor, the panorama may be shown on the display. In such embodiments, the inspection platform may apply the trained model to the panorama in real time (e.g., to columns of pixels) and then populate graphical elements as appropriate.
  • The inspection platform may also be able to establish the layout of the interior space based on the outputs produced by the trained model (step 505). As mentioned above, the outputs may be representative of junctures that are predicted by the trained model based on an analysis of the panorama. For each juncture, the inspection platform may infer, predict, or otherwise establish a spatial position in the context of a coordinate system (e.g., defined with respect to the interior space). With these spatial positions, the inspection platform can determine the layout of the interior space. Thereafter, the inspection platform may encode information regarding the layout in a data structure that is associated with the interior space (step 506). For example, the inspection platform may encode the spatial positions of the junctures, the dimensions of the walls, the height of the ceiling, etc. The data structure is normally stored in a memory that is internal to the computing device on which the inspection platform is executing. However, the inspection platform could alternatively or additionally cause transmission of the data structure to a destination external to the computing device (step 507). For example, the data structure could be transmitted to another computing device (e.g., server system 108 of FIG. 1 ) to which the computing device is communicatively connected.
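  • As a simplified sketch of steps 505-506, assume each wall-wall juncture has been reduced to the (column, row) pixel at which it meets the floor-wall boundary, that the panorama is equirectangular, and that the camera height above the floor is known or assumed. Under those assumptions, each corner can be projected onto a floor-plane coordinate system and the resulting layout encoded in a simple data structure; the geometry, parameter values, and file format here are illustrative assumptions only.
```python
import json
import math

def corner_position(col, row, width, height, camera_height=1.5):
    """Project a floor-level corner pixel of an equirectangular panorama
    onto a floor-plane coordinate system centered on the camera.

    `camera_height` (meters above the floor) is an assumed value; it fixes
    the absolute scale of the layout.
    """
    azimuth = (col / width) * 2.0 * math.pi            # horizontal angle
    below_horizon = ((row / height) - 0.5) * math.pi   # angle below the horizon
    distance = camera_height / math.tan(below_horizon)
    return (distance * math.cos(azimuth), distance * math.sin(azimuth))

def encode_layout(corner_pixels, width, height, ceiling_height, path):
    """Establish a simple polygonal layout from ordered floor-level corner
    pixels and encode it in a data structure associated with the space."""
    corners = [corner_position(c, r, width, height) for (c, r) in corner_pixels]
    walls = []
    for i, start in enumerate(corners):
        end = corners[(i + 1) % len(corners)]
        walls.append({"start": start, "end": end, "length": math.dist(start, end)})
    layout = {"corners": corners, "walls": walls, "ceiling_height": ceiling_height}
    with open(path, "w") as f:
        json.dump(layout, f, indent=2)
    return layout
```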
  • FIG. 7 includes a flow diagram of another process 700 for facilitating a guided procedure for modeling an interior space using an inspection platform that is executing on a computing device. Again, while the process 700 is described in the context of an interior space, aspects of the process 700 may be similarly applicable to modeling an exterior space. Initially, the inspection platform may receive input that is indicative of a request from a user to establish the layout of an interior space that includes a series of junctures (step 701), each of which represents a point at which a different pair of surfaces are joined. Consider, for example, a rectangular room with four 90-degree corners. This rectangular room comprises various junctures of different types. There are four wall-wall boundaries at which different pairs of walls join together. Along each wall, there is also a floor-wall boundary and a ceiling-wall boundary. Together, these various junctures define the periphery of the interior space from a 3D perspective.
  • Thereafter, the inspection platform may instruct the user to generate a panorama by panning an image sensor across the interior space (step 702). This can be accomplished using the camera housed in a computing device that is associated with the user. In some embodiments the inspection platform resides on the computing device, while in other embodiments the inspection platform resides on another computing device to which the computing device is communicatively connected. In embodiments where the inspection platform resides on the computing device associated with the user, the inspection platform may configure a capture session so that the panorama is made available by the operating system after being generated by the camera. For instance, the inspection platform may cause the capture session to be customized by configuring a capture parameter of the camera based on a characteristic of the interior space. As an example, the resolution, focus, or flash could be altered based on the ambient light level, distance to wall, location in interior space, etc. These characteristics of the interior space may be determined based on an output produced by a sensor included in a sensor suite (e.g., sensor suite 212 of FIG. 2 ) of the computing device.
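  • The following sketch illustrates one way a capture parameter could be configured from characteristics of the interior space. The thresholds, parameter names, and the mapping from ambient light to resolution and flash are illustrative assumptions rather than values taken from the disclosure.
```python
from dataclasses import dataclass

@dataclass
class CaptureParameters:
    resolution: tuple          # (width, height) in pixels
    flash_enabled: bool
    focus_distance_m: float    # fixed focus distance, in meters

def configure_capture(ambient_lux: float, distance_to_wall_m: float) -> CaptureParameters:
    """Pick capture parameters from characteristics of the interior space.

    Assumed policy for illustration: low ambient light enables the flash and
    drops the resolution to reduce noise, while the focus distance tracks the
    estimated distance to the nearest wall.
    """
    low_light = ambient_lux < 50.0
    resolution = (2016, 1512) if low_light else (4032, 3024)
    return CaptureParameters(
        resolution=resolution,
        flash_enabled=low_light,
        focus_distance_m=max(0.5, distance_to_wall_m),
    )
```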
  • The inspection platform can then acquire the panorama of the interior space that is generated by the image sensor (step 703). In embodiments where the inspection platform resides on the computing device that was used to capture the panorama, the inspection platform may obtain the panorama directly from the operating system. Alternatively, the inspection platform may receive the panorama from across a network (e.g., via communication module 208 of FIG. 2 ). Thereafter, the inspection platform can apply a trained model to the panorama to produce a series of outputs (step 704). Each output in the series of outputs may be representative of a juncture that is predicted by the trained model based on an analysis of the panorama. The trained model may be configured to perform pixel-wise classification of pixel data corresponding to the panorama in a columnar manner to produce the series of outputs.
  • In some embodiments, the inspection platform establishes the layout of the interior space based on the series of outputs (step 705). For example, the inspection platform may calculate the dimensions of the interior space based on the series of outputs and then create a 2D floor plan for the interior space that is based on the dimensions. As another example, the inspection platform may calculate the dimensions of the interior space based on the series of outputs and then create a 3D floor plan for the interior space that is based on the dimensions. Whether the floor plan is 2D or 3D may depend on the type(s) of junctures that are predicted by the inspection platform. Moreover, the inspection platform can encode information regarding the layout in a data structure that is associated with the interior space (step 706). Step 706 of FIG. 7 may be substantially similar to step 506 of FIG. 5 .
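  • As a brief sketch of the difference between the 2D and 3D cases, a 2D floor plan represented as an ordered polygon of corners can be extruded into a simple 3D floor plan once a ceiling height is available (e.g., when ceiling-wall junctures were also predicted). The representation below is an assumption for illustration.
```python
def extrude_floor_plan(corners_2d, ceiling_height):
    """Turn a 2D floor plan (ordered corner polygon, in meters) into a simple
    3D floor plan by adding floor and ceiling vertices for each corner."""
    floor = [(x, y, 0.0) for (x, y) in corners_2d]
    ceiling = [(x, y, ceiling_height) for (x, y) in corners_2d]
    walls = []
    n = len(corners_2d)
    for i in range(n):
        j = (i + 1) % n
        # Each wall is a quad built from two floor vertices and two ceiling vertices.
        walls.append([floor[i], floor[j], ceiling[j], ceiling[i]])
    return {"floor": floor, "ceiling": ceiling, "walls": walls}
```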
  • In some embodiments, the inspection platform is configured to provide feedback to the user during the capture session in which the panorama is captured. For example, the inspection platform may cause display of the panorama on an interface (step 707) and then indicate a location of each juncture that is predicted by the trained model by overlaying at least one bounding box on the panorama (step 708). As shown in FIG. 6 , the perimeter of each bounding box is typically defined by a set of four outputs. Normally, the set of four outputs includes a first output representing a floor-wall boundary, a second output representing a ceiling-wall boundary, and third and fourth outputs representing different wall-wall boundaries. In some embodiments, steps 707-708 are performed in near real time. Thus, bounding boxes may be overlaid on the panorama as the image sensor is panned across the interior space (and the panorama is presented on the display of the computing device). In other embodiments, steps 707-708 are performed after the panorama is generated. For example, steps 707-708 may be performed by the inspection platform in response to receiving input indicative of a selection of the panorama by the user.
  • FIG. 8 includes a flow diagram of a process 800 for identifying objects that are contained in a physical space for which a panorama is available. This physical space could be an interior space or an exterior space. In some embodiments objects are identified by applying computer vision models directly to the panorama, while in other embodiments the panorama is projected back into a normal perspective image before the computer vision models are applied. Additionally or alternatively, computer vision models may be applied to some or all of the images that collectively form the panorama.
  • Initially, an inspection platform can acquire a panorama of the physical space that is generated by an image sensor of a computing device (step 801). As discussed above with reference to FIGS. 4-5 and 7 , the panorama is normally generated as part of a measurement operation in which a user pans the image sensor across the physical space. For example, the user may be guided by the inspection platform to systematically pan the image sensor across a portion of the physical space (e.g., so as to image lateral space in 360 degrees), or the user may be guided by the inspection platform to systematically pan the image sensor across the entire physical space (e.g., so as to image lateral space in 360 degrees and vertical space in 360 degrees).
  • Thereafter, the inspection platform may apply one or more classification models to the panorama (step 802), so as to identify one or more types of objects included in the panorama. Each classification model may be designed and then trained to identify a different type of object. For example, the classification models may include a first classification model that is trained to identify stoves, a second classification model that is trained to identify water heaters, a third classification model that is trained to identify water stains, etc. Accordingly, the classification models may be trained to identify objects that are associated with some degree of risk, as well as visual indicia of damage caused by those objects.
  • The classification models that are applied to the panorama may depend on the nature of the physical space. Assume, for example, that a user specifies that she is interested in generating a panorama of a kitchen and then proceeds to image the kitchen as discussed above. In such a situation, the inspection platform may only apply those classification models that have been deemed suitable or appropriate for kitchens. As an example, the inspection platform may apply classification models associated with stoves, refrigerators, and water stains but not washing machines or porches. This may advantageously allow the inspection platform to conserve computational resources since not every classification model is necessarily applied to every panorama obtained by the inspection platform.
  • To determine which classification models, if any, are appropriate for a given physical space, the inspection platform may access a data structure in which each classification model is programmatically associated with one or more types of physical spaces. For example, the classification model trained to identify stoves may be labelled as appropriate for kitchens, while the classification model trained to identify water stains may be labelled as appropriate for kitchens, bathrooms, bedrooms, and exterior spaces. These relationships may be manually defined (e.g., at the time of training), or these relationships may be automatically determined by the inspection platform (e.g., based on an analysis of a pool of panoramas to which each classification model available to the inspection platform may be applied).
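  • A minimal sketch of such a data structure follows; the model names, space types, and callable interface are assumptions for illustration. The point is simply that only the classification models programmatically associated with the type of physical space are applied to a given panorama.
```python
# Illustrative registry (assumed names) associating each classification model
# with the types of physical spaces for which it is considered appropriate.
MODEL_REGISTRY = {
    "stove_detector":           {"kitchen"},
    "refrigerator_detector":    {"kitchen"},
    "water_stain_detector":     {"kitchen", "bathroom", "bedroom", "exterior"},
    "washing_machine_detector": {"laundry_room"},
    "porch_detector":           {"exterior"},
}

def select_models(space_type: str) -> list:
    """Return only the classification models applicable to the given space,
    so that not every model is applied to every panorama."""
    return [name for name, spaces in MODEL_REGISTRY.items() if space_type in spaces]

def classify_panorama(panorama, space_type, models):
    """Apply only the applicable models. `models` maps model names to
    callables that return a list of detections (e.g., bounding boxes)."""
    detections = {}
    for name in select_models(space_type):
        detections[name] = models[name](panorama)
    return detections
```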
  • Much like the trained model discussed above with reference to FIGS. 5 and 7 , each classification model may produce, as output, a bounding box or pixel-wise classification that defines the pixels determined to represent an instance of the corresponding type of object. For a given panorama, the inspection platform may obtain several bounding boxes that are associated with different types of objects. For example, in the event that the panorama is associated with a kitchen, the inspection platform may ultimately obtain a first bounding box that identifies the stove, a second bounding box that identifies the refrigerator, a third bounding box that identifies a water stain, etc.
  • In some embodiments, the inspection platform is also configured to determine the dimensions of each object identified by a classification model based on analysis of the panorama (step 803). As an example, for a given object, the inspection platform may identify its boundaries based on the output produced by the classification model and then compute, infer, or otherwise establish its dimensions based on the boundaries. The dimensions could be determined in an absolute sense (e.g., based solely on analysis of its boundaries), or the dimensions could be determined in a relative sense (e.g., by considering its boundaries in the context of junctures of the physical space, other objects in the physical space, etc.).
  • The inspection platform can encode information regarding the outputs, if any, produced by the classification model(s) in a data structure that is associated with the physical space (step 804). Step 804 of FIG. 8 may be substantially similar to steps 506 and 706 of FIGS. 5 and 7 , respectively, except that the information pertains to an object rather than the physical space. For example, the inspection platform may encode the dimensions of the object in the data structure. Additionally or alternatively, the inspection platform may encode the boundaries of the bounding box (e.g., in terms of pixel coordinates of the panorama) determined for the object.
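  • The sketch below illustrates steps 803-804 under strong simplifying assumptions: an object's bounding box, given in panorama pixel coordinates, is converted to approximate metric dimensions by referencing the pixel height of a wall whose metric height is already known from the layout, and the resulting detections are encoded in a JSON data structure. The scaling approach, names, and file format are illustrative assumptions; a production system would also account for panorama distortion and the object's distance from the camera.
```python
import json

def estimate_object_dimensions(bbox_px, wall_height_px, wall_height_m):
    """Crudely convert an object's bounding box (x0, y0, x1, y1, in panorama
    pixels) into approximate dimensions by referencing the pixel height of a
    nearby wall whose metric height is already known from the layout."""
    x0, y0, x1, y1 = bbox_px
    meters_per_pixel = wall_height_m / wall_height_px
    return {"width_m": (x1 - x0) * meters_per_pixel,
            "height_m": (y1 - y0) * meters_per_pixel}

def encode_detections(detections, path):
    """Encode classification outputs (per-object bounding boxes and estimated
    dimensions) in the data structure associated with the physical space."""
    with open(path, "w") as f:
        json.dump(detections, f, indent=2)
```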
  • Note that while the sequences of the steps performed in the processes described herein are exemplary, the steps can be performed in various sequences and combinations. For example, steps could be added to, or removed from, these processes. Similarly, steps could be replaced or reordered. Thus, the descriptions of these processes are intended to be open ended.
  • Additional steps may also be included in some embodiments.
  • For example, when a trained model is applied to a panorama as discussed above with reference to FIGS. 5 and 7, outputs will be produced that are representative of predicted locations of junctures. However, these predicted locations may be slightly inaccurate depending on the location of the computing device used to capture the panorama, as well as other factors such as lighting, resolution, etc. The inspection platform may attempt to limit the inaccuracy by adjusting the predicted locations. For instance, the inspection platform can be configured to mathematically model the spatial locations of individual junctures (and the spatial relationships between pairs of junctures) based on the panorama and/or outputs generated by other sensors included in the computing device.
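  • One simple, illustrative way to adjust predicted corner locations is to enforce a Manhattan-world assumption, in which each wall is snapped to a right angle relative to the first wall. This is not the specific mathematical model described in the disclosure, merely a sketch of the kind of adjustment that could reduce small prediction errors in rooms with rectangular footprints.
```python
import math

def snap_to_manhattan(corners):
    """Adjust predicted corner positions under a Manhattan-world assumption.

    Each wall segment between consecutive corners is rotated to the nearest
    multiple of 90 degrees relative to the first wall while preserving its
    length, and the corners are re-chained from the first corner. Note that
    the resulting polygon may need a small closing correction at the end.
    """
    adjusted = [corners[0]]
    base = math.atan2(corners[1][1] - corners[0][1], corners[1][0] - corners[0][0])
    for i in range(1, len(corners)):
        dx = corners[i][0] - corners[i - 1][0]
        dy = corners[i][1] - corners[i - 1][1]
        length = math.hypot(dx, dy)
        angle = math.atan2(dy, dx) - base
        snapped = base + round(angle / (math.pi / 2)) * (math.pi / 2)
        px, py = adjusted[-1]
        adjusted.append((px + length * math.cos(snapped),
                         py + length * math.sin(snapped)))
    return adjusted
```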
  • As another example, when a classification model is applied to a panorama as discussed above with reference to FIG. 8 , an output may be produced that identifies an instance of a given type of object. In such a scenario, information regarding the object may be presented on an interface for review by the user. Thus, the user may be able to confirm or deny the instance of the object as identified by the classification model. Additionally or alternatively, the inspection platform may be able to prompt or permit the user to provide information regarding the instance of the object. Assume, for example, that the inspection platform applies, to a panorama, a classification model that identifies an instance of a stove. In this situation, the inspection platform may proactively ask the user to provide additional information regarding the stove (e.g., fuel type and age).
  • Processing System
  • FIG. 9 is a block diagram illustrating an example of a processing system 900 in which at least some operations described herein can be implemented. For example, components of the processing system 900 may be hosted on a computing device that includes an inspection platform, or components of the processing system 900 may be hosted on a computing device with which a panorama of a physical space is captured.
  • The processing system 900 may include a central processing unit (“processor”) 902, main memory 906, non-volatile memory 910, network adapter 912, video display 918, input/output devices 920, control device 922 (e.g., a keyboard or pointing device), drive unit 924 including a storage medium 926, and signal generation device 930 that are communicatively connected to a bus 916. The bus 916 is illustrated as an abstraction that represents one or more physical buses or point-to-point connections that are connected by appropriate bridges, adapters, or controllers. The bus 916, therefore, can include a system bus, a Peripheral Component Interconnect (PCI) bus or PCI-Express bus, a HyperTransport or industry standard architecture (ISA) bus, a small computer system interface (SCSI) bus, a universal serial bus (USB), inter-integrated circuit (I2C) bus, or an Institute of Electrical and Electronics Engineers (IEEE) standard 1394 bus (also referred to as “Firewire”).
  • While the main memory 906, non-volatile memory 910, and storage medium 926 are shown to be a single medium, the terms “machine-readable medium” and “storage medium” should be taken to include a single medium or multiple media (e.g., a centralized/distributed database and/or associated caches and servers) that store one or more sets of instructions 928. The terms “machine-readable medium” and “storage medium” shall also be taken to include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by the processing system 900.
  • In general, the routines executed to implement the embodiments of the disclosure may be implemented as part of an operating system or a specific application, component, program, object, module, or sequence of instructions (collectively referred to as “computer programs”). The computer programs typically comprise one or more instructions (e.g., instructions 904, 908, 928) set at various times in various memory and storage devices in a computing device. When read and executed by the processors 902, the instruction(s) cause the processing system 900 to perform operations to execute elements involving the various aspects of the present disclosure.
  • Further examples of machine- and computer-readable media include recordable-type media, such as volatile memory devices and non-volatile memory devices 910, removable disks, hard disk drives, and optical disks (e.g., Compact Disk Read-Only Memory (CD-ROMS) and Digital Versatile Disks (DVDs)), and transmission-type media, such as digital and analog communication links.
  • The network adapter 912 enables the processing system 900 to mediate data in a network 914 with an entity that is external to the processing system 900 through any communication protocol supported by the processing system 900 and the external entity. The network adapter 912 can include a network adapter card, a wireless network interface card, a router, an access point, a wireless router, a switch, a multilayer switch, a protocol converter, a gateway, a bridge, a bridge router, a hub, a digital media receiver, a repeater, or any combination thereof.
  • Remarks
  • The foregoing description of various embodiments of the claimed subject matter has been provided for the purposes of illustration and description. It is not intended to be exhaustive or to limit the claimed subject matter to the precise forms disclosed. Many modifications and variations will be apparent to one skilled in the art. Embodiments were chosen and described in order to best describe the principles of the invention and its practical applications, thereby enabling those skilled in the relevant art to understand the claimed subject matter, the various embodiments, and the various modifications that are suited to the particular uses contemplated.
  • Although the Detailed Description describes certain embodiments and the best mode contemplated, the technology can be practiced in many ways no matter how detailed the Detailed Description appears. Embodiments may vary considerably in their implementation details, while still being encompassed by the specification. Particular terminology used when describing certain features or aspects of various embodiments should not be taken to imply that the terminology is being redefined herein to be restricted to any specific characteristics, features, or aspects of the technology with which that terminology is associated. In general, the terms used in the following claims should not be construed to limit the technology to the specific embodiments disclosed in the specification, unless those terms are explicitly defined herein. Accordingly, the actual scope of the technology encompasses not only the disclosed embodiments, but also all equivalent ways of practicing or implementing the embodiments.
  • The language used in the specification has been principally selected for readability and instructional purposes. It may not have been selected to delineate or circumscribe the subject matter. It is therefore intended that the scope of the technology be limited not by this Detailed Description, but rather by any claims that issue on an application based hereon. Accordingly, the disclosure of various embodiments is intended to be illustrative, but not limiting, of the scope of the technology as set forth in the following claims.

Claims (20)

What is claimed is:
1. A non-transitory medium with instructions stored thereon that, when executed by a processor of a computing device, cause the computing device to perform operations comprising:
receiving input that is indicative of a request to establish a layout of an interior space that includes a series of junctures, each of which represents a point at which a different pair of vertically planar surfaces are joined;
instructing a user to generate a panorama image by panning a camera across the interior space, so as to image the interior space in 360 degrees along a horizontal plane;
acquiring, by the camera, inertial data and the panorama image of the interior space;
generating an equirectangular panorama image by temporally aligning the inertial data with image data associated with the panorama image;
applying one or more trained classification models to the equirectangular panorama image to generate visual indicia to indicate an abnormal appearance of a structure in the interior space, wherein the abnormal appearance of the structure comprises one or more risks or hazards;
establishing the layout of the interior space; and
encoding information regarding the layout in a data structure that is associated with the interior space.
2. The non-transitory medium of claim 1, further comprising:
causing a capture session to be customized by configuring a capture parameter of the camera, wherein the capture parameter comprises camera resolution, focus, and flash; and
acquiring the inertial data and the panorama image of the interior space according to the customized capture session.
3. The non-transitory medium of claim 1, further comprising applying a trained model to the equirectangular panorama image to produce a series of outputs, wherein each output in the series of outputs is representative of a corresponding juncture that is predicted by the trained model based on an analysis of the equirectangular panorama image.
4. The non-transitory medium of claim 3, wherein establishing the layout of the interior space comprises:
predicting, for each output in the series of outputs, a spatial position of the corresponding juncture in a context of a coordinate system; and
calculating dimensions of the interior space based on the spatial positions predicted for the series of outputs.
5. The non-transitory medium of claim 4, wherein the establishing the layout of the interior space comprises creating a three-dimensional (3D) floor plan for the interior space that is based on the dimensions.
6. The non-transitory medium of claim 1, wherein each juncture in the series of junctures represents a floor-wall boundary at which a floor and a wall join, a ceiling-wall boundary at which a ceiling and a wall join, or a wall-wall boundary at which a pair of walls join.
7. The non-transitory medium of claim 1, wherein the camera is contained in the computing device, and wherein the operations further comprise:
in response to receiving the input, configuring a capture session so that one or more of the panorama image and the equirectangular panorama image is made available by an operating system of the computing device after being generated by the camera.
8. The non-transitory medium of claim 7, wherein the panorama image is representative of a series of frames with overlapping portions that are joined together by the operating system of the computing device to collectively represent the interior space.
9. The non-transitory medium of claim 1, wherein the one or more trained classification models perform pixel-wise classification of pixel data corresponding to one or more of the panorama image and the equirectangular panorama image.
10. The non-transitory medium of claim 1, wherein the operations further comprise:
causing a capture session to be customized by configuring a capture parameter of the camera based on a characteristic of the interior space.
11. The non-transitory medium of claim 1, wherein the operations further comprise:
causing display of the panorama image on an interface; and
indicating a location of each juncture that is predicted by the one or more trained classification models by overlaying at least one bounding box on the panorama image, wherein a perimeter of each bounding box is defined by a set of four outputs, and wherein the set of four outputs includes a first output representing a floor-wall boundary, a second output representing a ceiling-wall boundary, and third and fourth outputs representing different wall-wall boundaries.
12. The non-transitory medium of claim 11, wherein causing the display of the panorama image on an interface; and indicating a location of each juncture are performed in near real time as the camera is panned across the interior space.
13. A method implemented by a computer program executing on a computing device, the method comprising:
receiving input that is indicative of a request to establish a layout of an interior space that includes a series of junctures, each of which represents a point at which a different pair of surfaces are joined;
instructing a user to generate a panorama image by panning a camera around the interior space;
acquiring, by the camera, inertial data and at least two images of the interior space;
generating an equirectangular panorama image by temporally aligning the inertial data with the at least two images;
applying one or more trained classification models to the equirectangular panorama image to generate visual indicia to indicate an abnormal appearance of a structure in the interior space, wherein the abnormal appearance of the structure comprises one or more risks or hazards; and
causing display of at least a portion of the equirectangular panorama image with graphical elements overlaid thereon.
14. The method of claim 13, further comprising:
causing a capture session to be customized by configuring a capture parameter of the camera, wherein the capture parameter comprises camera resolution, focus, and flash; and
acquiring the inertial data and the panorama image of the interior space according to the customized capture session.
15. The method of claim 13, further comprising applying a trained model to the equirectangular panorama image to produce a series of outputs, wherein each output in the series of outputs is representative of a corresponding juncture that is predicted by the trained model based on an analysis of the equirectangular panorama image, wherein establishing the layout of the interior space comprises:
predicting, for each output in the series of outputs, a spatial position of the corresponding juncture in a context of a coordinate system, and
calculating dimensions of the interior space based on the spatial positions predicted for the series of outputs.
16. The method of claim 13, wherein each juncture in the series of junctures represents a floor-wall boundary at which a floor and a wall join, a ceiling-wall boundary at which a ceiling and a wall join, or a wall-wall boundary at which a pair of walls join.
17. The method of claim 13, wherein the panorama image is representative of a series of frames with overlapping portions that are joined together by an operating system of the computing device to collectively represent the interior space.
18. The method of claim 13, wherein the one or more trained classification models perform pixel-wise classification of pixel data corresponding to one or more of the panorama image and the equirectangular panorama image.
19. The method of claim 13, further comprising:
causing display of the panorama image on an interface; and
indicating a location of each juncture that is predicted by the one or more trained classification models by overlaying at least one bounding box on the panorama image, wherein a perimeter of each bounding box is defined by a set of four outputs, and wherein the set of four outputs includes a first output representing a floor-wall boundary, a second output representing a ceiling-wall boundary, and third and fourth outputs representing different wall-wall boundaries.
20. The method of claim 19, wherein causing the display of the panorama image on an interface; and indicating a location of each juncture are performed in near real time as the camera is panned across the interior space.
US18/307,270 2020-10-13 2023-04-26 Generating measurements of physical structures and environments through automated analysis of sensor data Active US11960799B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/307,270 US11960799B2 (en) 2020-10-13 2023-04-26 Generating measurements of physical structures and environments through automated analysis of sensor data

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US202063091149P 2020-10-13 2020-10-13
US17/500,128 US11699001B2 (en) 2020-10-13 2021-10-13 Generating measurements of physical structures and environments through automated analysis of sensor data
US18/307,270 US11960799B2 (en) 2020-10-13 2023-04-26 Generating measurements of physical structures and environments through automated analysis of sensor data

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US17/500,128 Continuation US11699001B2 (en) 2020-10-13 2021-10-13 Generating measurements of physical structures and environments through automated analysis of sensor data

Publications (2)

Publication Number Publication Date
US20230259667A1 true US20230259667A1 (en) 2023-08-17
US11960799B2 US11960799B2 (en) 2024-04-16

Family

ID=81079292

Family Applications (2)

Application Number Title Priority Date Filing Date
US17/500,128 Active US11699001B2 (en) 2020-10-13 2021-10-13 Generating measurements of physical structures and environments through automated analysis of sensor data
US18/307,270 Active US11960799B2 (en) 2020-10-13 2023-04-26 Generating measurements of physical structures and environments through automated analysis of sensor data

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US17/500,128 Active US11699001B2 (en) 2020-10-13 2021-10-13 Generating measurements of physical structures and environments through automated analysis of sensor data

Country Status (4)

Country Link
US (2) US11699001B2 (en)
EP (1) EP4229552A4 (en)
CN (2) CN117744196A (en)
WO (1) WO2022081717A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11769245B2 (en) * 2021-10-21 2023-09-26 Goodrich Corporation Systems and methods of monitoring cargo load systems for damage detection

Citations (199)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6657667B1 (en) * 1997-11-25 2003-12-02 Flashpoint Technology, Inc. Method and apparatus for capturing a multidimensional array of overlapping images for composite image generation
US20070030341A1 (en) * 2005-08-03 2007-02-08 Sony Corporation Imaging system, camera control apparatus, panorama image generation method and program therefor
US20070081081A1 (en) * 2005-10-07 2007-04-12 Cheng Brett A Automated multi-frame image capture for panorama stitching using motion sensor
US20090147071A1 (en) * 2007-11-16 2009-06-11 Tenebraex Corporation Systems and methods of creating a virtual window
US20100054628A1 (en) * 2008-08-28 2010-03-04 Zoran Corporation Robust fast panorama stitching in mobile phones or cameras
US20110181690A1 (en) * 2010-01-26 2011-07-28 Sony Corporation Imaging control apparatus, imaging apparatus, imaging control method, and program
US20110181687A1 (en) * 2010-01-26 2011-07-28 Sony Corporation Imaging control apparatus, imaging control method, and program
US20120154442A1 (en) * 2010-12-21 2012-06-21 Sony Corporation Display control device, display control method, and program
US8217956B1 (en) * 2008-02-29 2012-07-10 Adobe Systems Incorporated Method and apparatus for rendering spherical panoramas
US20130155058A1 (en) * 2011-12-14 2013-06-20 The Board Of Trustees Of The University Of Illinois Four-dimensional augmented reality models for interactive visualization and automated construction progress monitoring
US20130156297A1 (en) * 2011-12-15 2013-06-20 Microsoft Corporation Learning Image Processing Tasks from Scene Reconstructions
US20130226515A1 (en) * 2012-02-03 2013-08-29 Eagle View Technologies, Inc. Systems and methods for estimation of building wall area and producing a wall estimation report
US8570329B1 (en) * 2012-10-31 2013-10-29 Google Inc. Subtle camera motions to indicate imagery type in a mapping system
US20140132788A1 (en) * 2012-11-09 2014-05-15 Sean Geoffrey Ramsay Systems and Methods for Generating Spherical Images
US20140160234A1 (en) * 2012-12-10 2014-06-12 Samsung Electronics Co., Ltd. Photographing apparatus
US20140333615A1 (en) * 2013-05-11 2014-11-13 Mitsubishi Electric Research Laboratories, Inc. Method For Reconstructing 3D Scenes From 2D Images
US20140362176A1 (en) * 2013-01-05 2014-12-11 Patrick A. St. Clair Spherical panoramic imaging system
US20150207988A1 (en) * 2014-01-23 2015-07-23 Nvidia Corporation Interactive panoramic photography based on combined visual and inertial orientation tracking
US20150304652A1 (en) * 2014-04-17 2015-10-22 Nokia Technologies Oy Device orientation correction method for panorama images
US20150304576A1 (en) * 2012-11-21 2015-10-22 Thales Method of panoramic 3d mosaicing of a scene
US20150312478A1 (en) * 2012-11-27 2015-10-29 Fotonation Limited Digital Image Capture Device Having A Panorama Mode
US20150373266A1 (en) * 2014-06-19 2015-12-24 Omnivision Technologies, Inc. 360 degree multi-camera system
US20160005211A1 (en) * 2014-07-01 2016-01-07 Qualcomm Incorporated System and method of three-dimensional model generation
US20160088287A1 (en) * 2014-09-22 2016-03-24 Samsung Electronics Company, Ltd. Image stitching for three-dimensional video
US20160086379A1 (en) * 2014-09-22 2016-03-24 Samsung Electronics Company, Ltd. Interaction with three-dimensional video
US20160104510A1 (en) * 2013-05-26 2016-04-14 Pixellot Ltd. Method and system for low cost television production
US20160217225A1 (en) * 2015-01-28 2016-07-28 Matterport, Inc. Classifying, separating and displaying individual stories of a three-dimensional model of a multi-story structure based on captured image data of the multi-story structure
US20160360104A1 (en) * 2015-06-02 2016-12-08 Qualcomm Incorporated Systems and methods for producing a combined view from fisheye cameras
US9521321B1 (en) * 2015-02-11 2016-12-13 360 Lab Llc. Enabling manually triggered multiple field of view image capture within a surround image mode for multi-lens mobile devices
US20170132835A1 (en) * 2013-07-23 2017-05-11 Hover Inc. 3d building analyzer
US20170169620A1 (en) * 2015-12-15 2017-06-15 Intel Corporation Generation of synthetic 3-dimensional object images for recognition systems
US20170180680A1 (en) * 2015-12-21 2017-06-22 Hai Yu Object following view presentation method and system
US20170278308A1 (en) * 2016-03-23 2017-09-28 Intel Corporation Image modification and enhancement using 3-dimensional object model based recognition
US20170352191A1 (en) * 2016-06-07 2017-12-07 Visbit Inc. Virtual Reality 360-Degree Video Camera System for Live Streaming
US20180027178A1 (en) * 2016-07-19 2018-01-25 Gopro, Inc. Mapping of spherical image data into rectangular faces for transport and decoding across networks
US20180089763A1 (en) * 2016-09-23 2018-03-29 Aon Benfield Inc. Platform, Systems, and Methods for Identifying Property Characteristics and Property Feature Maintenance Through Aerial Imagery Analysis
US20180122042A1 (en) * 2016-10-31 2018-05-03 Adobe Systems Incorporated Utilizing an inertial measurement device to adjust orientation of panorama digital images
US20180137642A1 (en) * 2016-11-15 2018-05-17 Magic Leap, Inc. Deep learning system for cuboid detection
US20180139431A1 (en) * 2012-02-24 2018-05-17 Matterport, Inc. Capturing and aligning panoramic image and depth data
US20180144555A1 (en) * 2015-12-08 2018-05-24 Matterport, Inc. Determining and/or generating data for an architectural opening area associated with a captured three-dimensional model
US20180143756A1 (en) * 2012-06-22 2018-05-24 Matterport, Inc. Defining, displaying and interacting with tags in a three-dimensional model
US20180190033A1 (en) * 2016-12-30 2018-07-05 Facebook, Inc. Systems and methods for providing augmented reality effects and three-dimensional mapping associated with interior spaces
US10021295B1 (en) * 2013-06-03 2018-07-10 Amazon Technologies, Inc. Visual cues for managing image capture
US10033928B1 (en) * 2015-10-29 2018-07-24 Gopro, Inc. Apparatus and methods for rolling shutter compensation for multi-camera systems
US20180225870A1 (en) * 2015-05-29 2018-08-09 Hover Inc. Image capture for a multi-dimensional building model
US20180225393A1 (en) * 2014-05-13 2018-08-09 Atheer, Inc. Method for forming walls to align 3d objects in 2d environment
US20180232471A1 (en) * 2017-02-16 2018-08-16 The University Of North Carolina At Chapel Hill Methods, systems, and computer readable media for acoustic classification and optimization for multi-modal rendering of real-world scenes
US20180247132A1 (en) * 2017-02-28 2018-08-30 Microsoft Technology Licensing, Llc System and method for person counting in image data
US20180262683A1 (en) * 2017-03-10 2018-09-13 Gopro, Inc. Image Quality Assessment
US20180268220A1 (en) * 2017-03-17 2018-09-20 Magic Leap, Inc. Room layout estimation methods and techniques
US20180302614A1 (en) * 2017-04-13 2018-10-18 Facebook, Inc. Panoramic camera systems
US20180315162A1 (en) * 2017-04-28 2018-11-01 Google Inc. Extracting 2d floor plan from 3d grid representation of interior space
US20180332205A1 (en) * 2017-05-12 2018-11-15 Microsoft Technology Licensing, Llc Image capture using a hinged device with multiple cameras
US10133933B1 (en) * 2017-08-07 2018-11-20 Standard Cognition, Corp Item put and take detection using image recognition
US20180338084A1 (en) * 2017-05-16 2018-11-22 Axis Ab System comprising a video camera and a client device and a method performed by the same
US20180374192A1 (en) * 2015-12-29 2018-12-27 Dolby Laboratories Licensing Corporation Viewport Independent Image Coding and Rendering
US20190007590A1 (en) * 2017-05-25 2019-01-03 Eys3D Microelectronics, Co. Image processor and related image system
US20190020817A1 (en) * 2017-07-13 2019-01-17 Zillow Group, Inc. Connecting and using building interior data acquired from mobile devices
US20190026958A1 (en) * 2012-02-24 2019-01-24 Matterport, Inc. Employing three-dimensional (3d) data predicted from two-dimensional (2d) images using neural networks for 3d modeling applications and other applications
US20190051054A1 (en) * 2017-08-08 2019-02-14 Smart Picture Technologies, Inc. Method for measuring and modeling spaces using markerless augmented reality
US20190058811A1 (en) * 2017-08-21 2019-02-21 Gopro, Inc. Image stitching with electronic rolling shutter correction
US20190124749A1 (en) * 2016-04-06 2019-04-25 Philips Lighting Holding B.V. Controlling a lighting system
US10321109B1 (en) * 2017-06-13 2019-06-11 Vulcan Inc. Large volume video data transfer over limited capacity bus
US10319150B1 (en) * 2017-05-15 2019-06-11 A9.Com, Inc. Object preview in a mixed reality environment
US20190205485A1 (en) * 2017-12-28 2019-07-04 Dassault Systemes Generating 3d models representing buildings
US20190243928A1 (en) * 2017-12-28 2019-08-08 Dassault Systemes Semantic segmentation of 2d floor plans with a pixel-wise classifier
US20190251352A1 (en) * 2018-02-09 2019-08-15 Matterport, Inc. Selecting exterior images of a structure based on capture positions of indoor images associated with the structure
US10395147B2 (en) * 2017-10-30 2019-08-27 Rakuten, Inc. Method and apparatus for improved segmentation and recognition of images
US20190266293A1 (en) * 2016-11-17 2019-08-29 Lifull Co., Ltd. Information processing apparatus, information processing method, and program
US20190266793A1 (en) * 2018-02-23 2019-08-29 Lowe's Companies, Inc. Apparatus, systems, and methods for tagging building features in a 3d space
US20190303512A1 (en) * 2018-03-29 2019-10-03 Aecom Digital design tools for building construction
US20190318538A1 (en) * 2018-04-11 2019-10-17 Zillow Group, Inc. Presenting image transition sequences between viewing locations
US20190325628A1 (en) * 2018-04-23 2019-10-24 Accenture Global Solutions Limited Ai-driven design platform
US20190332866A1 (en) * 2018-04-26 2019-10-31 Fyusion, Inc. Method and apparatus for 3-d auto tagging
US20190335120A1 (en) * 2018-04-26 2019-10-31 Canon Kabushiki Kaisha Image pickup apparatus, image pickup method, and storage medium
US20190340835A1 (en) * 2018-05-04 2019-11-07 Signaturize Holdings Ltd Generating Virtual Representations
US20190340814A1 (en) * 2018-05-04 2019-11-07 Signaturize Holdings Ltd Generating Virtual Representations
US20190349598A1 (en) * 2017-01-03 2019-11-14 Nokia Technologies Oy An Apparatus, a Method and a Computer Program for Video Coding and Decoding
US20190379856A1 (en) * 2018-06-08 2019-12-12 Lg Electronics Inc. Method for processing overlay in 360-degree video system and apparatus for the same
US20190385363A1 (en) * 2018-06-15 2019-12-19 Geomni, Inc. Computer Vision Systems and Methods for Modeling Roofs of Structures Using Two-Dimensional and Partial Three-Dimensional Data
US20190387165A1 (en) * 2018-06-07 2019-12-19 Eys3D Microelectronics, Co. Image device for generating depth images and related electronic device
US20190392630A1 (en) * 2018-06-20 2019-12-26 Google Llc Automated understanding of three dimensional (3d) scenes for augmented reality applications
US20200007841A1 (en) * 2018-06-28 2020-01-02 EyeSpy360 Limited Transforming Locations in a Spherical Image Viewer
US20200005428A1 (en) * 2018-06-28 2020-01-02 EyeSpy360 Limited Creating a Floor Plan from Images in Spherical Format
US20200008024A1 (en) * 2018-06-27 2020-01-02 Niantic, Inc. Multi-Sync Ensemble Model for Device Localization
US20200036955A1 (en) * 2017-03-22 2020-01-30 Nokia Technologies Oy A method and an apparatus and a computer program product for adaptive streaming
US10554896B2 (en) * 2016-05-04 2020-02-04 Insidemaps, Inc. Stereoscopic imaging using mobile computing devices having front-facing and rear-facing cameras
US20200043186A1 (en) * 2017-01-27 2020-02-06 Ucl Business Plc Apparatus, method, and system for alignment of 3d datasets
US20200051338A1 (en) * 2017-12-22 2020-02-13 Khurram Mahmood Zia Techniques for crowdsourcing a room design, using augmented reality
US20200057824A1 (en) * 2018-08-20 2020-02-20 Sri International Machine learning system for building renderings and building information modeling data
US20200074739A1 (en) * 2018-05-31 2020-03-05 Jido Inc. Method for establishing a common reference frame amongst devices for an augmented reality session
US20200090417A1 (en) * 2018-09-18 2020-03-19 AB Strategies SEZC Ltd. Fixing holes in a computer generated model of a real-world environment
US10607405B2 (en) * 2016-05-27 2020-03-31 Rakuten, Inc. 3D model generating system, 3D model generating method, and program
US20200116493A1 (en) * 2018-10-11 2020-04-16 Zillow Group, Inc. Automated Mapping Information Generation From Inter-Connected Images
US20200175753A1 (en) * 2018-11-30 2020-06-04 Cupix, Inc. Apparatus and method of three-dimensional reverse modeling of building structure by using photographic images
US20200211284A1 (en) * 2018-12-28 2020-07-02 National Tsing Hua University Indoor scene structural estimation system and estimation method thereof based on deep learning network
US10708507B1 (en) * 2018-10-11 2020-07-07 Zillow Group, Inc. Automated control of image acquisition via use of acquisition device sensors
US20200258144A1 (en) * 2019-02-11 2020-08-13 A9.Com, Inc. Curated environments for augmented reality applications
US20200296350A1 (en) * 2018-07-13 2020-09-17 Lg Electronics Inc. Method and device for transmitting and receiving metadata on coordinate system of dynamic viewpoint
US20200302686A1 (en) * 2019-03-18 2020-09-24 Geomagical Labs, Inc. System and method for virtual modeling of indoor scenes from imagery
US20200302681A1 (en) * 2019-03-18 2020-09-24 Geomagical Labs, Inc. Virtual interaction with three-dimensional indoor room imagery
US20200311468A1 (en) * 2019-03-29 2020-10-01 Fuji Xerox Co., Ltd. Indoor localization using real-time context fusion of visual information from static and dynamic cameras
US20200312013A1 (en) * 2019-03-29 2020-10-01 Airbnb, Inc. Generating two-dimensional plan from three-dimensional image data
US20200327262A1 (en) * 2019-04-15 2020-10-15 Armstrong World Industries, Inc. Systems and methods of predicting architectural materials within a space
US20200334833A1 (en) * 2019-04-16 2020-10-22 At&T Intellectual Property I, L.P. Selecting viewpoints for rendering in volumetric video presentations
US10825247B1 (en) * 2019-11-12 2020-11-03 Zillow Group, Inc. Presenting integrated building information using three-dimensional building models
US10832437B2 (en) * 2018-09-05 2020-11-10 Rakuten, Inc. Method and apparatus for assigning image location and direction to a floorplan diagram based on artificial intelligence
US20200387788A1 (en) * 2019-06-06 2020-12-10 Bluebeam, Inc. Methods and systems for automatically detecting design elements in a two-dimensional design document
US20200394849A1 (en) * 2019-06-12 2020-12-17 Jeremiah Timberline Barker Color and texture rendering for application in a three-dimensional model of a space
US20210004190A1 (en) * 2019-07-02 2021-01-07 Parsempo Ltd. Digital display set-up
US20210004933A1 (en) * 2019-07-01 2021-01-07 Geomagical Labs, Inc. Method and system for image generation
US20210019453A1 (en) * 2019-07-15 2021-01-21 Ke.Com (Beijing) Technology Co., Ltd. Artificial intelligence systems and methods for interior design
US20210058731A1 (en) * 2018-05-11 2021-02-25 Clepseadra, Inc. Acoustic program, acoustic device, and acoustic system
US10937211B2 (en) * 2018-11-09 2021-03-02 Autodesk, Inc. Automated parametrization of floor-plan sketches for multi-objective building optimization tasks
US20210064216A1 (en) * 2019-08-28 2021-03-04 Zillow Group, Inc. Automated Tools For Generating Mapping Information For Buildings
US20210073449A1 (en) * 2019-09-06 2021-03-11 BeamUp, Ltd. Structural design systems and methods for floor plan simulation and modeling in mass customization of equipment
US20210104093A1 (en) * 2019-10-07 2021-04-08 Zillow Group, Inc. Providing Simulated Lighting Information For Three-Dimensional Building Models
US20210117582A1 (en) * 2019-10-16 2021-04-22 Select Interior Concepts, Inc. Visualizing Building Interior Information In A User-Customized Manner
US20210117583A1 (en) * 2019-10-18 2021-04-22 Pictometry International Corp. Systems for the classification of interior structure areas based on exterior images
US20210118165A1 (en) * 2019-10-18 2021-04-22 Pictometry International Corp. Geospatial object geometry extraction from imagery
US20210127060A1 (en) * 2019-10-25 2021-04-29 Alibaba Group Holding Limited Method for wall line determination, method, apparatus, and device for spatial modeling
US20210125397A1 (en) * 2019-10-28 2021-04-29 Zillow Group, Inc. Generating Floor Maps For Buildings From Automated Analysis Of Visual Data Of The Buildings' Interiors
US11006041B1 (en) * 2020-05-07 2021-05-11 Qualcomm Incorporated Multiple camera system for wide angle imaging

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10410350B2 (en) * 2017-10-30 2019-09-10 Rakuten, Inc. Skip architecture neural network machine and method for improved semantic segmentation
US20190303648A1 (en) * 2018-04-02 2019-10-03 QRI Group, LLC Smart surveillance and diagnostic system for oil and gas field surface environment via unmanned aerial vehicle and cloud computation

Patent Citations (201)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6657667B1 (en) * 1997-11-25 2003-12-02 Flashpoint Technology, Inc. Method and apparatus for capturing a multidimensional array of overlapping images for composite image generation
US20070030341A1 (en) * 2005-08-03 2007-02-08 Sony Corporation Imaging system, camera control apparatus, panorama image generation method and program therefor
US20070081081A1 (en) * 2005-10-07 2007-04-12 Cheng Brett A Automated multi-frame image capture for panorama stitching using motion sensor
US20090147071A1 (en) * 2007-11-16 2009-06-11 Tenebraex Corporation Systems and methods of creating a virtual window
US8217956B1 (en) * 2008-02-29 2012-07-10 Adobe Systems Incorporated Method and apparatus for rendering spherical panoramas
US20100054628A1 (en) * 2008-08-28 2010-03-04 Zoran Corporation Robust fast panorama stitching in mobile phones or cameras
US20110181690A1 (en) * 2010-01-26 2011-07-28 Sony Corporation Imaging control apparatus, imaging apparatus, imaging control method, and program
US20110181687A1 (en) * 2010-01-26 2011-07-28 Sony Corporation Imaging control apparatus, imaging control method, and program
US20120154442A1 (en) * 2010-12-21 2012-06-21 Sony Corporation Display control device, display control method, and program
US20130155058A1 (en) * 2011-12-14 2013-06-20 The Board Of Trustees Of The University Of Illinois Four-dimensional augmented reality models for interactive visualization and automated construction progress monitoring
US20130156297A1 (en) * 2011-12-15 2013-06-20 Microsoft Corporation Learning Image Processing Tasks from Scene Reconstructions
US20130226515A1 (en) * 2012-02-03 2013-08-29 Eagle View Technologies, Inc. Systems and methods for estimation of building wall area and producing a wall estimation report
US20190026958A1 (en) * 2012-02-24 2019-01-24 Matterport, Inc. Employing three-dimensional (3d) data predicted from two-dimensional (2d) images using neural networks for 3d modeling applications and other applications
US20180139431A1 (en) * 2012-02-24 2018-05-17 Matterport, Inc. Capturing and aligning panoramic image and depth data
US20180143756A1 (en) * 2012-06-22 2018-05-24 Matterport, Inc. Defining, displaying and interacting with tags in a three-dimensional model
US8570329B1 (en) * 2012-10-31 2013-10-29 Google Inc. Subtle camera motions to indicate imagery type in a mapping system
US20140132788A1 (en) * 2012-11-09 2014-05-15 Sean Geoffrey Ramsay Systems and Methods for Generating Spherical Images
US20150304576A1 (en) * 2012-11-21 2015-10-22 Thales Method of panoramic 3d mosaicing of a scene
US20150312478A1 (en) * 2012-11-27 2015-10-29 Fotonation Limited Digital Image Capture Device Having A Panorama Mode
US20140160234A1 (en) * 2012-12-10 2014-06-12 Samsung Electronics Co., Ltd. Photographing apparatus
US20140362176A1 (en) * 2013-01-05 2014-12-11 Patrick A. St. Clair Spherical panoramic imaging system
US20140333615A1 (en) * 2013-05-11 2014-11-13 Mitsubishi Electric Research Laboratories, Inc. Method For Reconstructing 3D Scenes From 2D Images
US20160104510A1 (en) * 2013-05-26 2016-04-14 Pixellot Ltd. Method and system for low cost television production
US10021295B1 (en) * 2013-06-03 2018-07-10 Amazon Technologies, Inc. Visual cues for managing image capture
US20220147662A1 (en) * 2013-07-23 2022-05-12 Hover Inc. 3d building analyzer
US20170132835A1 (en) * 2013-07-23 2017-05-11 Hover Inc. 3d building analyzer
US20150207988A1 (en) * 2014-01-23 2015-07-23 Nvidia Corporation Interactive panoramic photography based on combined visual and inertial orientation tracking
US20210183128A1 (en) * 2014-02-20 2021-06-17 William Ernest Miller Method and system for construction project management using photo imaging measurements
US20150304652A1 (en) * 2014-04-17 2015-10-22 Nokia Technologies Oy Device orientation correction method for panorama images
US20180225393A1 (en) * 2014-05-13 2018-08-09 Atheer, Inc. Method for forming walls to align 3d objects in 2d environment
US20150373266A1 (en) * 2014-06-19 2015-12-24 Omnivision Technologies, Inc. 360 degree multi-camera system
US20160005211A1 (en) * 2014-07-01 2016-01-07 Qualcomm Incorporated System and method of three-dimensional model generation
US20160088287A1 (en) * 2014-09-22 2016-03-24 Samsung Electronics Company, Ltd. Image stitching for three-dimensional video
US20160086379A1 (en) * 2014-09-22 2016-03-24 Samsung Electronics Company, Ltd. Interaction with three-dimensional video
US20160217225A1 (en) * 2015-01-28 2016-07-28 Matterport, Inc. Classifying, separating and displaying individual stories of a three-dimensional model of a multi-story structure based on captured image data of the multi-story structure
US9521321B1 (en) * 2015-02-11 2016-12-13 360 Lab Llc. Enabling manually triggered multiple field of view image capture within a surround image mode for multi-lens mobile devices
US20180225870A1 (en) * 2015-05-29 2018-08-09 Hover Inc. Image capture for a multi-dimensional building model
US20160360104A1 (en) * 2015-06-02 2016-12-08 Qualcomm Incorporated Systems and methods for producing a combined view from fisheye cameras
US20200177780A1 (en) * 2015-10-29 2020-06-04 Gopro, Inc. Apparatus and methods for rolling shutter compensation for multi-camera systems
US10033928B1 (en) * 2015-10-29 2018-07-24 Gopro, Inc. Apparatus and methods for rolling shutter compensation for multi-camera systems
US20180144555A1 (en) * 2015-12-08 2018-05-24 Matterport, Inc. Determining and/or generating data for an architectural opening area associated with a captured three-dimensional model
US20170169620A1 (en) * 2015-12-15 2017-06-15 Intel Corporation Generation of synthetic 3-dimensional object images for recognition systems
US20170180680A1 (en) * 2015-12-21 2017-06-22 Hai Yu Object following view presentation method and system
US20180374192A1 (en) * 2015-12-29 2018-12-27 Dolby Laboratories Licensing Corporation Viewport Independent Image Coding and Rendering
US20170278308A1 (en) * 2016-03-23 2017-09-28 Intel Corporation Image modification and enhancement using 3-dimensional object model based recognition
US20190124749A1 (en) * 2016-04-06 2019-04-25 Philips Lighting Holding B.V. Controlling a lighting system
US10554896B2 (en) * 2016-05-04 2020-02-04 Insidemaps, Inc. Stereoscopic imaging using mobile computing devices having front-facing and rear-facing cameras
US20220256076A1 (en) * 2016-05-25 2022-08-11 Gopro, Inc. Three-dimensional noise reduction
US10607405B2 (en) * 2016-05-27 2020-03-31 Rakuten, Inc. 3D model generating system, 3D model generating method, and program
US20170352191A1 (en) * 2016-06-07 2017-12-07 Visbit Inc. Virtual Reality 360-Degree Video Camera System for Live Streaming
US20180027178A1 (en) * 2016-07-19 2018-01-25 Gopro, Inc. Mapping of spherical image data into rectangular faces for transport and decoding across networks
US20180089763A1 (en) * 2016-09-23 2018-03-29 Aon Benfield Inc. Platform, Systems, and Methods for Identifying Property Characteristics and Property Feature Maintenance Through Aerial Imagery Analysis
US20180122042A1 (en) * 2016-10-31 2018-05-03 Adobe Systems Incorporated Utilizing an inertial measurement device to adjust orientation of panorama digital images
US20180137642A1 (en) * 2016-11-15 2018-05-17 Magic Leap, Inc. Deep learning system for cuboid detection
US20190266293A1 (en) * 2016-11-17 2019-08-29 Lifull Co., Ltd. Information processing apparatus, information processing method, and program
US20180190033A1 (en) * 2016-12-30 2018-07-05 Facebook, Inc. Systems and methods for providing augmented reality effects and three-dimensional mapping associated with interior spaces
US20190349598A1 (en) * 2017-01-03 2019-11-14 Nokia Technologies Oy An Apparatus, a Method and a Computer Program for Video Coding and Decoding
US20200043186A1 (en) * 2017-01-27 2020-02-06 Ucl Business Plc Apparatus, method, and system for alignment of 3d datasets
US11043026B1 (en) * 2017-01-28 2021-06-22 Pointivo, Inc. Systems and methods for processing 2D/3D data for structures of interest in a scene and wireframes generated therefrom
US20180232471A1 (en) * 2017-02-16 2018-08-16 The University Of North Carolina At Chapel Hill Methods, systems, and computer readable media for acoustic classification and optimization for multi-modal rendering of real-world scenes
US20180247132A1 (en) * 2017-02-28 2018-08-30 Microsoft Technology Licensing, Llc System and method for person counting in image data
US20180262683A1 (en) * 2017-03-10 2018-09-13 Gopro, Inc. Image Quality Assessment
US20180268220A1 (en) * 2017-03-17 2018-09-20 Magic Leap, Inc. Room layout estimation methods and techniques
US20200036955A1 (en) * 2017-03-22 2020-01-30 Nokia Technologies Oy A method and an apparatus and a computer program product for adaptive streaming
US20180302614A1 (en) * 2017-04-13 2018-10-18 Facebook, Inc. Panoramic camera systems
US20180315162A1 (en) * 2017-04-28 2018-11-01 Google Inc. Extracting 2d floor plan from 3d grid representation of interior space
US20180332205A1 (en) * 2017-05-12 2018-11-15 Microsoft Technology Licensing, Llc Image capture using a hinged device with multiple cameras
US10319150B1 (en) * 2017-05-15 2019-06-11 A9.Com, Inc. Object preview in a mixed reality environment
US20180338084A1 (en) * 2017-05-16 2018-11-22 Axis Ab System comprising a video camera and a client device and a method performed by the same
US20190007590A1 (en) * 2017-05-25 2019-01-03 Eys3D Microelectronics, Co. Image processor and related image system
US10321109B1 (en) * 2017-06-13 2019-06-11 Vulcan Inc. Large volume video data transfer over limited capacity bus
US20190020817A1 (en) * 2017-07-13 2019-01-17 Zillow Group, Inc. Connecting and using building interior data acquired from mobile devices
US10133933B1 (en) * 2017-08-07 2018-11-20 Standard Cognition, Corp Item put and take detection using image recognition
US20190051054A1 (en) * 2017-08-08 2019-02-14 Smart Picture Technologies, Inc. Method for measuring and modeling spaces using markerless augmented reality
US20190058811A1 (en) * 2017-08-21 2019-02-21 Gopro, Inc. Image stitching with electronic rolling shutter correction
US10395147B2 (en) * 2017-10-30 2019-08-27 Rakuten, Inc. Method and apparatus for improved segmentation and recognition of images
US20200051338A1 (en) * 2017-12-22 2020-02-13 Khurram Mahmood Zia Techniques for crowdsourcing a room design, using augmented reality
US20190205485A1 (en) * 2017-12-28 2019-07-04 Dassault Systemes Generating 3d models representing buildings
US20190243928A1 (en) * 2017-12-28 2019-08-08 Dassault Systemes Semantic segmentation of 2d floor plans with a pixel-wise classifier
US20190251352A1 (en) * 2018-02-09 2019-08-15 Matterport, Inc. Selecting exterior images of a structure based on capture positions of indoor images associated with the structure
US11494857B2 (en) * 2018-02-15 2022-11-08 Flyreel, Inc. Property inspection system and method
US20190266793A1 (en) * 2018-02-23 2019-08-29 Lowe's Companies, Inc. Apparatus, systems, and methods for tagging building features in a 3d space
US20190303512A1 (en) * 2018-03-29 2019-10-03 Aecom Digital design tools for building construction
US20190318538A1 (en) * 2018-04-11 2019-10-17 Zillow Group, Inc. Presenting image transition sequences between viewing locations
US20190325628A1 (en) * 2018-04-23 2019-10-24 Accenture Global Solutions Limited Ai-driven design platform
US20190332866A1 (en) * 2018-04-26 2019-10-31 Fyusion, Inc. Method and apparatus for 3-d auto tagging
US20190335120A1 (en) * 2018-04-26 2019-10-31 Canon Kabushiki Kaisha Image pickup apparatus, image pickup method, and storage medium
US20190340814A1 (en) * 2018-05-04 2019-11-07 Signaturize Holdings Ltd Generating Virtual Representations
US20190340835A1 (en) * 2018-05-04 2019-11-07 Signaturize Holdings Ltd Generating Virtual Representations
US20210058731A1 (en) * 2018-05-11 2021-02-25 Clepseadra, Inc. Acoustic program, acoustic device, and acoustic system
US20200074739A1 (en) * 2018-05-31 2020-03-05 Jido Inc. Method for establishing a common reference frame amongst devices for an augmented reality session
US20190387165A1 (en) * 2018-06-07 2019-12-19 Eys3D Microelectronics, Co. Image device for generating depth images and related electronic device
US20190379856A1 (en) * 2018-06-08 2019-12-12 Lg Electronics Inc. Method for processing overlay in 360-degree video system and apparatus for the same
US20190385363A1 (en) * 2018-06-15 2019-12-19 Geomni, Inc. Computer Vision Systems and Methods for Modeling Roofs of Structures Using Two-Dimensional and Partial Three-Dimensional Data
US20190392630A1 (en) * 2018-06-20 2019-12-26 Google Llc Automated understanding of three dimensional (3d) scenes for augmented reality applications
US20210176388A1 (en) * 2018-06-26 2021-06-10 Gopro, Inc. Entropy maximization based auto-exposure
US20200008024A1 (en) * 2018-06-27 2020-01-02 Niantic, Inc. Multi-Sync Ensemble Model for Device Localization
US20200005428A1 (en) * 2018-06-28 2020-01-02 EyeSpy360 Limited Creating a Floor Plan from Images in Spherical Format
US20200007841A1 (en) * 2018-06-28 2020-01-02 EyeSpy360 Limited Transforming Locations in a Spherical Image Viewer
US11200734B2 (en) * 2018-07-03 2021-12-14 Shanghai Yiwo Information Technology Co., Ltd. Method for reconstructing three-dimensional space scene based on photographing
US20200296350A1 (en) * 2018-07-13 2020-09-17 Lg Electronics Inc. Method and device for transmitting and receiving metadata on coordinate system of dynamic viewpoint
US20200057824A1 (en) * 2018-08-20 2020-02-20 Sri International Machine learning system for building renderings and building information modeling data
US20210321035A1 (en) * 2018-09-01 2021-10-14 Digital Animal Interactive Inc. Image processing methods and systems
US10832437B2 (en) * 2018-09-05 2020-11-10 Rakuten, Inc. Method and apparatus for assigning image location and direction to a floorplan diagram based on artificial intelligence
US20200090417A1 (en) * 2018-09-18 2020-03-19 AB Strategies SEZC Ltd. Fixing holes in a computer generated model of a real-world environment
US20210383115A1 (en) * 2018-10-09 2021-12-09 Resonai Inc. Systems and methods for 3d scene augmentation and reconstruction
US10708507B1 (en) * 2018-10-11 2020-07-07 Zillow Group, Inc. Automated control of image acquisition via use of acquisition device sensors
US20220003555A1 (en) * 2018-10-11 2022-01-06 Zillow, Inc. Use Of Automated Mapping Information From Inter-Connected Images
US20200116493A1 (en) * 2018-10-11 2020-04-16 Zillow Group, Inc. Automated Mapping Information Generation From Inter-Connected Images
US10937211B2 (en) * 2018-11-09 2021-03-02 Autodesk, Inc. Automated parametrization of floor-plan sketches for multi-objective building optimization tasks
US20210335052A1 (en) * 2018-11-23 2021-10-28 Samsung Electronics Co., Ltd. Electronic device and control method thereof
US20200175753A1 (en) * 2018-11-30 2020-06-04 Cupix, Inc. Apparatus and method of three-dimensional reverse modeling of building structure by using photographic images
US20200211284A1 (en) * 2018-12-28 2020-07-02 National Tsing Hua University Indoor scene structural estimation system and estimation method thereof based on deep learning network
US20210312702A1 (en) * 2019-01-22 2021-10-07 Fyusion, Inc. Damage detection from multi-view visual data
US20200258144A1 (en) * 2019-02-11 2020-08-13 A9.Com, Inc. Curated environments for augmented reality applications
US20200302686A1 (en) * 2019-03-18 2020-09-24 Geomagical Labs, Inc. System and method for virtual modeling of indoor scenes from imagery
US20200302681A1 (en) * 2019-03-18 2020-09-24 Geomagical Labs, Inc. Virtual interaction with three-dimensional indoor room imagery
US20200311468A1 (en) * 2019-03-29 2020-10-01 Fuji Xerox Co., Ltd. Indoor localization using real-time context fusion of visual information from static and dynamic cameras
US20200312013A1 (en) * 2019-03-29 2020-10-01 Airbnb, Inc. Generating two-dimensional plan from three-dimensional image data
US20220198709A1 (en) * 2019-04-02 2022-06-23 Buildots Ltd. Determining position of an image capture device
US20200327262A1 (en) * 2019-04-15 2020-10-15 Armstrong World Industries, Inc. Systems and methods of predicting architectural materials within a space
US20200334833A1 (en) * 2019-04-16 2020-10-22 At&T Intellectual Property I, L.P. Selecting viewpoints for rendering in volumetric video presentations
US20200387788A1 (en) * 2019-06-06 2020-12-10 Bluebeam, Inc. Methods and systems for automatically detecting design elements in a two-dimensional design document
US20200394849A1 (en) * 2019-06-12 2020-12-17 Jeremiah Timberline Barker Color and texture rendering for application in a three-dimensional model of a space
US20210004933A1 (en) * 2019-07-01 2021-01-07 Geomagical Labs, Inc. Method and system for image generation
US20210004190A1 (en) * 2019-07-02 2021-01-07 Parsempo Ltd. Digital display set-up
US20210019453A1 (en) * 2019-07-15 2021-01-21 Ke.Com (Beijing) Technology Co., Ltd. Artificial intelligence systems and methods for interior design
US20220164493A1 (en) * 2019-08-28 2022-05-26 Zillow, Inc. Automated Tools For Generating Mapping Information For Buildings
US20210064216A1 (en) * 2019-08-28 2021-03-04 Zillow Group, Inc. Automated Tools For Generating Mapping Information For Buildings
US20210073449A1 (en) * 2019-09-06 2021-03-11 BeamUp, Ltd. Structural design systems and methods for floor plan simulation and modeling in mass customization of equipment
US20210104093A1 (en) * 2019-10-07 2021-04-08 Zillow Group, Inc. Providing Simulated Lighting Information For Three-Dimensional Building Models
US20210117582A1 (en) * 2019-10-16 2021-04-22 Select Interior Concepts, Inc. Visualizing Building Interior Information In A User-Customized Manner
US20210118165A1 (en) * 2019-10-18 2021-04-22 Pictometry International Corp. Geospatial object geometry extraction from imagery
US20210117583A1 (en) * 2019-10-18 2021-04-22 Pictometry International Corp. Systems for the classification of interior structure areas based on exterior images
US20210127060A1 (en) * 2019-10-25 2021-04-29 Alibaba Group Holding Limited Method for wall line determination, method, apparatus, and device for spatial modeling
US20210125397A1 (en) * 2019-10-28 2021-04-29 Zillow Group, Inc. Generating Floor Maps For Buildings From Automated Analysis Of Visual Data Of The Buildings' Interiors
US20220189122A1 (en) * 2019-11-12 2022-06-16 Zillow, Inc. Presenting Building Information Using Building Models
US20230196684A1 (en) * 2019-11-12 2023-06-22 MFTB Holdco, Inc. Presenting Building Information Using Video And Building Models
US10825247B1 (en) * 2019-11-12 2020-11-03 Zillow Group, Inc. Presenting integrated building information using three-dimensional building models
US20210150805A1 (en) * 2019-11-14 2021-05-20 Qualcomm Incorporated Layout estimation using planes
US20210150088A1 (en) * 2019-11-18 2021-05-20 Autodesk, Inc. Building information model (bim) element extraction from floor plan drawings using machine learning
US20210192748A1 (en) * 2019-12-18 2021-06-24 Zoox, Inc. Prediction on top-down scenes based on object motion
US20210199809A1 (en) * 2019-12-30 2021-07-01 Matterport, Inc. System and method of capturing and generating panoramic three-dimensional images
US11763478B1 (en) * 2020-01-17 2023-09-19 Apple Inc. Scan-based measurements
US20210225090A1 (en) * 2020-01-17 2021-07-22 Apple Inc. Floorplan generation based on room scanning
US20210256177A1 (en) * 2020-02-14 2021-08-19 Pic2Sketch System and method for creating a 2D floor plan using 3D pictures
US20210272308A1 (en) * 2020-02-27 2021-09-02 Dell Products L.P. Automated capacity management using artificial intelligence techniques
US20230099352A1 (en) * 2020-02-27 2023-03-30 Tailorbird, Inc. Apparatus and method of converting digital images to three-dimensional construction images
US20210272358A1 (en) * 2020-02-28 2021-09-02 Aurora Solar Inc. Automated three-dimensional building model estimation
US20210279950A1 (en) * 2020-03-04 2021-09-09 Magic Leap, Inc. Systems and methods for efficient floorplan generation from 3d scans of indoor scenes
US20210303757A1 (en) * 2020-03-30 2021-09-30 Kla Corporation Semiconductor fabrication process parameter determination using a generative adversarial network
US20210312203A1 (en) * 2020-04-06 2021-10-07 Nvidia Corporation Projecting images captured using fisheye lenses for feature detection in autonomous machine applications
US20210343073A1 (en) * 2020-04-13 2021-11-04 Charles C. Carrington Georeferencing a generated floorplan and generating structural models
US20210326026A1 (en) * 2020-04-17 2021-10-21 Occipital, Inc. System and user interface for viewing and interacting with three-dimensional scenes
US20210352222A1 (en) * 2020-05-06 2021-11-11 At&T Intellectual Property I, L.P. System for infinite windows with optical disparity and depth resolution
US11006041B1 (en) * 2020-05-07 2021-05-11 Qualcomm Incorporated Multiple camera system for wide angle imaging
US20210348927A1 (en) * 2020-05-08 2021-11-11 Canon Kabushiki Kaisha Information processing apparatus, information processing method, and recording medium
US20210375062A1 (en) * 2020-05-29 2021-12-02 Open Space Labs, Inc. Machine learning based object identification using scaled diagram and three-dimensional model
US20220300669A1 (en) * 2020-06-05 2022-09-22 Hangzhou Qunhe Information Technology Co., Ltd. An auxiliary method for graphic home improvement design
US20210385378A1 (en) * 2020-06-05 2021-12-09 Zillow, Inc. Automated Generation On Mobile Devices Of Panorama Images For Building Locations And Subsequent Use
US20210383501A1 (en) * 2020-06-09 2021-12-09 Axis Ab Aligning digital images
US20220180595A1 (en) * 2020-06-12 2022-06-09 Boom Interactive, Inc. System and method for creating three-dimensional renderings of environments from two-dimensional images
US20220027656A1 (en) * 2020-07-24 2022-01-27 Ricoh Company, Ltd. Image matching method and apparatus and non-transitory computer-readable medium
US20230290132A1 (en) * 2020-07-29 2023-09-14 Magic Leap, Inc. Object recognition neural network training using multiple data sources
US20220076019A1 (en) * 2020-09-04 2022-03-10 Zillow, Inc. Automated Analysis Of Image Contents To Determine The Acquisition Location Of The Image
US20220092227A1 (en) * 2020-09-22 2022-03-24 Zillow, Inc. Automated Identification And Use Of Building Floor Plan Information
US20220108044A1 (en) * 2020-10-02 2022-04-07 OPR General Contracting, Inc. Kitchen Renovation System
US20220107977A1 (en) * 2020-10-05 2022-04-07 Modern Adjusting Services, LLC Methods, systems, and software for inspection of a structure
US20220114291A1 (en) * 2020-10-13 2022-04-14 Zillow, Inc. Automated Tools For Generating Building Mapping Information
US20220124298A1 (en) * 2020-10-21 2022-04-21 Coretronic Corporation Method and electronic apparatus for stitching three-dimensional spherical panorama
US20220148327A1 (en) * 2020-11-10 2022-05-12 Autodesk, Inc. Machine learning techniques for extracting floorplan elements from architectural drawings
US20220156426A1 (en) * 2020-11-13 2022-05-19 Qualcomm Technologies, Inc. Scene layout estimation
US20220224833A1 (en) * 2021-01-08 2022-07-14 Zillow, Inc. Automated Determination Of Image Acquisition Locations In Building Interiors Using Multiple Data Capture Devices
US11252329B1 (en) * 2021-01-08 2022-02-15 Zillow, Inc. Automated determination of image acquisition locations in building interiors using multiple data capture devices
US20220269885A1 (en) * 2021-02-25 2022-08-25 Zillow, Inc. Automated Direction Of Capturing In-Room Information For Use In Usability Assessment Of Buildings
US20220269888A1 (en) * 2021-02-25 2022-08-25 Zillow, Inc. Automated Usability Assessment Of Buildings Using Visual Data Of Captured In-Room Images
US20220284146A1 (en) * 2021-03-05 2022-09-08 Flyreel, Inc. Semi-supervised 3d indoor layout estimation from a single 360 degree panorama
US11094135B1 (en) * 2021-03-05 2021-08-17 Flyreel, Inc. Automated measurement of interior spaces through guided modeling of dimensions
US20220292421A1 (en) * 2021-03-09 2022-09-15 Patrick E Murphy Methods and apparatus for artificial intelligence conversion of change orders into an actionable interface
US20220292289A1 (en) * 2021-03-11 2022-09-15 GM Global Technology Operations LLC Systems and methods for depth estimation in a vehicle
US20220287530A1 (en) * 2021-03-15 2022-09-15 Midea Group Co., Ltd. Method and Apparatus for Localizing Mobile Robot in Environment
US20220375170A1 (en) * 2021-05-21 2022-11-24 Occipital, Inc. System for generation of floor plans and three-dimensional models
US20220383027A1 (en) * 2021-05-28 2022-12-01 Verizon Patent And Licensing Inc. Methods and Systems for Augmented Reality Room Identification Based on Room-Object Profile Data
US20220391627A1 (en) * 2021-06-01 2022-12-08 Buildingestimates.Com Limited Rapid and accurate modeling of a building construction structure including estimates, detailing, and take-offs using artificial intelligence
US20220406007A1 (en) * 2021-06-21 2022-12-22 The Travelers Indemnity Company Systems and methods for artificial intelligence (ai) three-dimensional modeling
US20230035601A1 (en) * 2021-07-28 2023-02-02 OPAL AI Inc. Floorplan Generation System And Methods Of Use
US20230051749A1 (en) * 2021-08-12 2023-02-16 Adobe Inc. Generating synthesized digital images utilizing class-specific machine-learning models
US20230093087A1 (en) * 2021-09-17 2023-03-23 Yembo, Inc. Browser optimized interactive electronic model based determination of attributes of a structure
US20230106339A1 (en) * 2021-09-22 2023-04-06 Awe Company Limited 2d and 3d floor plan generation
US20230095173A1 (en) * 2021-09-22 2023-03-30 MFTB Holdco, Inc. Automated Exchange And Use Of Attribute Information Between Building Images Of Multiple Types
US11842464B2 (en) * 2021-09-22 2023-12-12 MFTB Holdco, Inc. Automated exchange and use of attribute information between building images of multiple types
US11410362B1 (en) * 2021-10-12 2022-08-09 Procore Technologies, Inc. Automatic area detection
US11282268B1 (en) * 2021-10-27 2022-03-22 Flyreel, Inc. Top-down view mapping of interior spaces
US20230184949A1 (en) * 2021-12-09 2023-06-15 Robert Bosch Gmbh Learning-based system and method for estimating semantic maps from 2d lidar scans
US20230184537A1 (en) * 2021-12-10 2023-06-15 Flyreel, Inc. Provoking movement of computing devices with augmented reality features
US20230185978A1 (en) * 2021-12-14 2023-06-15 Buildots Ltd. Interactive gui for presenting construction information at construction projects
US20230206393A1 (en) * 2021-12-28 2023-06-29 MFTB Holdco, Inc. Automated Building Information Determination Using Inter-Image Analysis Of Multiple Building Images
US20230290072A1 (en) * 2022-03-10 2023-09-14 Matterport, Inc. System and method of object detection and interactive 3d models
US20230325547A1 (en) * 2022-04-08 2023-10-12 Honeywell International Inc. Fire system floor plan layout generation
US11783560B1 (en) * 2022-06-03 2023-10-10 Amazon Technologies, Inc. Three-dimensional room modeling and feature annotation
US11830135B1 (en) * 2022-07-13 2023-11-28 MFTB Holdco, Inc. Automated building identification using floor plans and acquired building images

Also Published As

Publication number Publication date
US11960799B2 (en) 2024-04-16
US11699001B2 (en) 2023-07-11
CN117744196A (en) 2024-03-22
US20220114298A1 (en) 2022-04-14
EP4229552A4 (en) 2024-03-06
EP4229552A1 (en) 2023-08-23
CN116406461A (en) 2023-07-07
CN116406461B (en) 2023-10-20
WO2022081717A1 (en) 2022-04-21

Similar Documents

Publication Publication Date Title
US11252329B1 (en) Automated determination of image acquisition locations in building interiors using multiple data capture devices
US11408738B2 (en) Automated mapping information generation from inter-connected images
US11238652B2 (en) Presenting integrated building information using building models
CA3090629C (en) Automated tools for generating mapping information for buildings
US11480433B2 (en) Use of automated mapping information from inter-connected images
US11481925B1 (en) Automated determination of image acquisition locations in building interiors using determined room shapes
US11836973B2 (en) Automated direction of capturing in-room information for use in usability assessment of buildings
US11790648B2 (en) Automated usability assessment of buildings using visual data of captured in-room images
US11632602B2 (en) Automated determination of image acquisition locations in building interiors using multiple data capture devices
US11501492B1 (en) Automated room shape determination using visual data of multiple captured in-room images
US11094135B1 (en) Automated measurement of interior spaces through guided modeling of dimensions
US11960799B2 (en) Generating measurements of physical structures and environments through automated analysis of sensor data
US20230184537A1 (en) Provoking movement of computing devices with augmented reality features
EP4293562A1 (en) Automated tools for assessing building mapping information generation
US20240029352A1 (en) Automated Tools For Incremental Generation Of Building Mapping Information

Legal Events

Date Code Title Description
AS Assignment

Owner name: FLYREEL, INC., COLORADO

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:PALMER, VICTOR;TRAN, VU;WEBB, BRIAN;AND OTHERS;SIGNING DATES FROM 20210510 TO 20210525;REEL/FRAME:063448/0227

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE