US20170078593A1 - 3d spherical image system - Google Patents

3D spherical image system

Info

Publication number
US20170078593A1
Authority
US
United States
Prior art keywords
operator
facing camera
capturing
image data
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/855,742
Inventor
Avideh Zakhor
Eric Lee Turner
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Indoor Reality
Hilti AG
Original Assignee
Indoor Reality Inc
Indoor Reality
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Indoor Reality Inc
Priority to US14/855,742
Assigned to Indoor Reality. Assignment of assignors interest (see document for details). Assignors: TURNER, ERIC LEE; ZAKHOR, AVIDEH
Priority to US14/947,869 (US10127718B2)
Assigned to INDOOR REALITY INC. Corrective assignment to correct the assignee name previously recorded at reel 036772, frame 0888. Assignors: TURNER, ERIC LEE; ZAKHOR, AVIDEH
Publication of US20170078593A1
Assigned to HILTI AG. Assignment of assignors interest (see document for details). Assignors: INDOOR REALITY, INC.
Legal status: Abandoned

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20 Image signal generators
    • H04N 13/204 Image signal generators using stereoscopic image cameras
    • H04N 13/243 Image signal generators using stereoscopic image cameras using three or more 2D image sensors
    • H04N 5/3415
    • G06K 9/00758
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/04 Texture mapping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/10 Geometric effects
    • G06T 15/20 Perspective computation
    • G06T 15/205 Image-based rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/05 Geographic models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • G06T 3/12
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformation in the plane of the image
    • G06T 3/40 Scaling the whole image or part thereof
    • G06T 3/4038 Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • G06T 7/2046
    • G06T 7/2093
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/10 Image acquisition
    • G06V 10/12 Details of acquisition arrangements; Constructional details thereof
    • G06V 10/14 Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V 10/147 Details of sensors, e.g. sensor lenses
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • H04N 13/0239
    • H04N 13/0278
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20 Image signal generators
    • H04N 13/275 Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
    • H04N 13/279 Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals the virtual viewpoint locations being selected by the viewers or determined by tracking
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/698 Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/40 Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
    • H04N 25/41 Extracting pixel data from a plurality of image sensors simultaneously picking up an image, e.g. for increasing the field of view by combining the outputs of a plurality of sensors
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 2013/0074 Stereoscopic image analysis
    • H04N 2013/0088 Synthesising a monoscopic image signal from stereoscopic images, e.g. synthesising a panoramic or high resolution monoscopic image

Abstract

Systems for producing 3D spherical imagery for a virtual walkthrough corresponding with a real environment are presented, the systems including: a rightward facing camera for capturing real-time rightward facing images, the rightward facing camera electronically coupled with a computing system; a leftward facing camera for capturing real-time leftward facing image data; a backward facing camera for capturing real-time backward facing image data; a frontward facing camera for capturing real-time frontward facing image data; an upward facing camera for capturing real-time upward facing image data; a number of laser scanners; and an inertial movement unit (IMU), where data from the number of laser scanners and the IMU captures 3D geometry information of the real environment and is rendered to provide a 3D virtual model of the real environment, and where image data from the cameras is rendered to provide image texture blending of the 3D virtual model.

Description

    BACKGROUND
  • In recent years, three-dimensional modeling of indoor and outdoor environments has attracted much interest due to its wide range of applications such as virtual reality, disaster management, virtual heritage conservation, and mapping of potentially hazardous sites. Manual construction of these models is labor intensive and time consuming. Interior modeling in particular poses significant challenges, the primary one being the presence of the operator in the resulting model, followed by the lack of GPS coverage indoors.
  • Traditional solutions to capturing spherical images include a number of cameras positioned in a ball configuration. While traditional solutions may provide a convenient structural platform for an operator to use, oftentimes the operator occupies a large portion of the captured and rendered image. In some applications, the presence of an operator in the image is not critical. However, in other applications, the presence of an operator may be undesirable or distracting. As such, 3D spherical image systems are presented herein.
  • BRIEF SUMMARY
  • The following presents a simplified summary of some embodiments of the invention in order to provide a basic understanding of the invention. This summary is not an extensive overview of the invention. It is not intended to identify key/critical elements of the invention or to delineate the scope of the invention. Its sole purpose is to present some embodiments of the invention in a simplified form as a prelude to the more detailed description that is presented below.
  • Systems for producing 3D spherical imagery for a virtual walkthrough corresponding with a real environment are presented, the systems including: a rightward facing camera for capturing real-time rightward facing images, the rightward facing camera electronically coupled with a computing system; a leftward facing camera for capturing real-time leftward facing image data; a backward facing camera for capturing real-time backward facing image data; a frontward facing camera for capturing real-time frontward facing image data; an upward facing camera for capturing real-time upward facing image data; a number of laser scanners; and an inertial movement unit (IMU), where data from the number of laser scanners and the IMU captures 3D geometry information of the real environment and is rendered to provide a 3D virtual model of the real environment, and where image data from the cameras is rendered to provide image texture blending of the 3D virtual model. In some embodiments, the laser scanners include: a first laser scanner for scanning a horizontal plane; a second laser scanner for scanning a first vertical plane normal to a direction of motion; a third laser scanner for scanning a second vertical plane normal to the direction of motion; and a fourth laser scanner for scanning a third vertical plane tangent to the direction of motion. In some embodiments, the cameras and the number of laser scanners are each positioned with an unobstructed field of view. In some embodiments, data collection sources include: a spectrometer for capturing light source data, a barometer for capturing atmospheric pressure data, a magnetometer for capturing magnetic field data, a thermometer for capturing temperature data, a wireless local area network (WLAN) packet capture device for capturing WLAN data, a CO2 meter for capturing carbon dioxide data, and a lux meter for measuring luminance data. In some embodiments, systems further include an assembly such as: a backpack assembly for carrying the system by an operator, a motorized assembly for carrying the system by an autonomous robotic device, a motorized assembly for carrying the system by a semi-autonomous robotic device, and a motorized assembly for carrying the system by an operator guided robotic device. In some embodiments, the frontward facing camera of the backpack assembly is positioned at approximately a height of an operator's head and extends beyond the operator's head, where the upward facing camera of the backpack assembly is positioned at approximately the height of an operator's head and extends above the operator's head, where the rightward facing camera of the backpack assembly is positioned at approximately a height of an operator's right shoulder and extends beyond the operator's right shoulder, and where the leftward facing camera of the backpack assembly is positioned at approximately a height of the operator's left shoulder and extends beyond the operator's left shoulder, where the cameras are positioned immediately proximate to the operator to minimize an appearance of an operator in the 3D virtual model and to maximize the real environment being captured. In some embodiments, image data from the cameras is stitched together to provide a seamless image texture. In some embodiments, the seamless image texture results in a seamless four pi steradian spherical image texture. In some embodiments, the seamless image texture results in a seamless cubic image texture.
  • In other embodiments backpack assemblies for carrying a system by an operator for producing 3D spherical imagery for a virtual walkthrough corresponding with a real environment are presented, the backpack assemblies including: a rightward facing camera for capturing real-time rightward facing images, the rightward facing camera electronically coupled with a computing system, where the rightward facing camera of the backpack assembly is positioned at approximately a height of an operator's right shoulder and extends beyond the operator's right shoulder; a leftward facing camera for capturing real-time leftward facing image data, where the leftward facing camera of the backpack assembly is positioned at approximately a height of the operator's left shoulder and extends beyond the operator's left shoulder; a backward facing camera for capturing real-time backward facing image data; a frontward facing camera for capturing real-time frontward facing image data, where the frontward facing camera of the backpack assembly is positioned at approximately a height of an operator's head and extends beyond the operator's head; an upward facing camera for capturing real-time upward facing image data, where the upward facing camera of the backpack assembly is positioned at approximately the height of an operator's head and extends above the operator's head, and where the cameras are positioned immediately proximate to the operator to minimize an appearance of an operator in the 3D virtual model and to maximize the real environment being captured; a number of laser scanners; and an inertial movement unit (IMU), where data from the number of laser scanners and the IMU captures 3D geometry information of the real environment and is rendered to provide a 3D virtual model of the real environment, and where image data from the cameras is rendered to provide image texture blending of the 3D virtual model. In some embodiments, the laser scanners include: a first laser scanner for scanning a horizontal plane; a second laser scanner for scanning a first vertical plane normal to a direction of motion; a third laser scanner for scanning a second vertical plane normal to the direction of motion; and a fourth laser scanner for scanning a third vertical plane tangent to the direction of motion.
  • BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
  • The present invention is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which like reference numerals refer to similar elements and in which:
  • FIG. 1 is an illustrative representation of a 3D spherical image system in accordance with embodiments of the present invention;
  • FIG. 2 is an illustrative representation of a 3D spherical image system on a backpack assembly in accordance with embodiments of the present invention; and
  • FIG. 3 is an illustrative side view representation of a 3D spherical image system on a backpack assembly carried by an operator in accordance with embodiments of the present invention.
  • DETAILED DESCRIPTION
  • As will be appreciated by one skilled in the art, the present invention may be a system, a method, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing.
  • A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.
  • Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
  • The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • FIG. 1 is an illustrative representation of a 3D spherical image system 100 in accordance with embodiments of the present invention. As illustrated, a number of cameras may be mounted in system embodiments featured herein. For example, embodiments illustrated include: rightward facing camera 102 for capturing real-time rightward facing images; leftward facing camera 104 for capturing real-time leftward facing image data; backward facing camera 106 for capturing real-time backward facing image data; frontward facing camera 108 for capturing real-time frontward facing image data; and upward facing camera 110 for capturing real-time upward facing image data. Image data from the cameras may be rendered to provide image texture blending of a 3D virtual model. In embodiments, the cameras may sample at frame rates in a range from approximately one frame every 4 seconds to approximately 30 frames per second. Still further, in embodiments, the cameras may be positioned to have overlapping fields of view (FOV). In some embodiments, the cameras have an FOV of approximately 90 to 360°. When camera embodiments are electronically coupled with a computing system as shown in FIG. 2, image data from the cameras may be stitched together to provide a seamless image texture. In addition, the seamless image texture may provide a seamless four pi steradian spherical image texture or a seamless cubic image texture without limitation.
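  • By way of illustration only, and not as part of the original disclosure, the following Python sketch shows one way image data from several outward facing cameras could be stitched into an equirectangular (four pi steradian) texture: each panorama pixel is converted to a unit view ray and filled from the first camera whose field of view covers that ray. The pinhole model, the nearest-pixel compositing in place of true blending, and all names and parameters are assumptions made for the example.

```python
# Hypothetical sketch: stitching several outward facing camera images into one
# equirectangular panorama. Not the patent's implementation; intrinsics and the
# compositing strategy are assumed for illustration.
import numpy as np

def equirect_rays(width, height):
    """Unit view rays for every pixel of an equirectangular panorama."""
    lon = (np.arange(width) + 0.5) / width * 2 * np.pi - np.pi
    lat = np.pi / 2 - (np.arange(height) + 0.5) / height * np.pi
    lon, lat = np.meshgrid(lon, lat)
    return np.stack([np.cos(lat) * np.cos(lon),   # x: forward
                     np.cos(lat) * np.sin(lon),   # y: leftward
                     np.sin(lat)], axis=-1)       # z: upward

def sample_camera(rays, cam_to_world, image, f, cx, cy):
    """Project world-frame rays into one pinhole camera (nearest-pixel lookup)."""
    rays_cam = rays @ cam_to_world[:3, :3]        # right-multiply == R^T @ ray
    z = rays_cam[..., 2]
    in_front = z > 1e-6                           # camera looks along its +z axis
    z_safe = np.where(in_front, z, 1.0)
    u = f * rays_cam[..., 0] / z_safe + cx
    v = f * rays_cam[..., 1] / z_safe + cy
    h, w = image.shape[:2]
    valid = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    ui = np.clip(u.astype(int), 0, w - 1)
    vi = np.clip(v.astype(int), 0, h - 1)
    return image[vi, ui], valid

def stitch(cameras, width=2048, height=1024):
    """Composite cameras into one panorama; first valid camera wins per pixel."""
    rays = equirect_rays(width, height)
    pano = np.zeros((height, width, 3), dtype=np.uint8)
    filled = np.zeros((height, width), dtype=bool)
    for cam_to_world, image, f, cx, cy in cameras:
        rgb, valid = sample_camera(rays, cam_to_world, image, f, cx, cy)
        take = valid & ~filled
        pano[take] = rgb[take]
        filled |= take
    return pano
```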
  • Further illustrated are a number of laser scanners including: laser scanner 120 for scanning a horizontal plane; laser scanner 122 for scanning a first vertical plane normal to a direction of motion (DOM) 140; laser scanner 124 for scanning a second vertical plane normal to DOM 140; and laser scanner 126 for scanning a third vertical plane tangent to DOM 140. It may be appreciated that the DOM illustrated is provided for clarity in understanding embodiments of the present invention and should not be construed as limiting with respect to the path actually traversed. Rather, the DOM indicates the general orientation of the camera embodiments disclosed with respect to the path actually traversed. The laser scanners are utilized to provide range data. For example, the horizontal plane laser scanner may be utilized to determine how the system moves in the X and Y directions, that is, for 2D localization. Further, scans from the first and second vertical plane laser scanners normal to the direction of motion may be stacked successively as the operator moves in order to determine the geometry of the environment surrounding the operator. Still further, the laser scanner scanning the third vertical plane tangent to the direction of motion may be utilized to determine how the system moves in the Z direction, that is, for Z-direction localization. Further included is at least one inertial movement unit (IMU) for providing orientation and movement of the system. Although an IMU is not illustrated, one skilled in the art will recognize that the shape and configuration of an IMU is relatively simple and thus it may be mounted in embodiments in a manner that does not occlude other data collection devices. As such, data from the laser scanners and the IMU captures 3D geometry information of the real environment and may be rendered to provide a 3D virtual model of the real environment. In embodiments, the laser scanners sample at a rate in a range of approximately 5 to 60 Hz. In embodiments, the IMU samples at a rate in a range of approximately 150 to 300 Hz. Furthermore, it may be appreciated that, in embodiments, the cameras and the laser scanners may each be positioned with an unobstructed field of view. In this manner, resulting 3D spherical images may be rendered with minimal or no appearance of an operator, of capture devices, or of associated hardware.
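  • As a purely illustrative sketch of how the vertical plane scans may be stacked into environment geometry as described above, the following Python fragment transforms each 2D scan line by the system pose estimated at its timestamp and accumulates the results into a single 3D point cloud. The scan layout and pose format are assumptions, not the patent's data structures.

```python
# Hypothetical sketch: stacking successive vertical-plane laser scans into a
# 3D point cloud using per-scan poses. Data layouts are assumed for the example.
import numpy as np

def scan_to_points(ranges, angles):
    """Polar 2D scan in the scanner's own vertical (y-z) plane -> 3D points."""
    ranges = np.asarray(ranges)
    angles = np.asarray(angles)
    return np.stack([np.zeros_like(ranges),       # x = 0 in the scan plane
                     ranges * np.cos(angles),     # y: leftward
                     ranges * np.sin(angles)],    # z: upward
                    axis=-1)

def accumulate(scans, poses):
    """scans: list of (ranges, angles); poses: matching list of 4x4 world poses."""
    cloud = []
    for (ranges, angles), T in zip(scans, poses):
        pts = scan_to_points(ranges, angles)
        cloud.append(pts @ T[:3, :3].T + T[:3, 3])  # rotate, then translate
    return np.concatenate(cloud, axis=0)
```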
  • FIG. 2 is an illustrative representation of a 3D spherical image system 200 on a backpack assembly 202 in accordance with embodiments of the present invention. The backpack assembly may include a data processing component 204, which may include an electronic computing device coupled with a power supply such as a battery pack. In addition, it may be desirable to include any number of data collection sources such as, for example: a spectrometer for capturing light source data, a barometer for capturing atmospheric pressure data, a magnetometer for capturing magnetic field data, a thermometer for capturing temperature data, a wireless local area network (WLAN) packet capture device for capturing WLAN data, a CO2 meter for capturing carbon dioxide data, and a lux meter for measuring luminance data. Thus, it may be possible to collect data from these various data collection sources and to map that data to a 3D virtual model, as sketched below.
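  • One possible, purely illustrative way to map such auxiliary readings onto the model is to tag each timestamped reading with the temporally nearest pose along the estimated trajectory. The sketch below assumes sorted pose timestamps and a simple timestamp/value reading layout, neither of which is specified by the text.

```python
# Hypothetical sketch: attaching environmental readings (CO2, temperature,
# lux, ...) to the nearest trajectory pose so they can be overlaid on the
# 3D virtual model. Data layouts are assumptions for illustration.
import numpy as np

def tag_readings(pose_times, poses, reading_times, values):
    """pose_times: sorted 1D array; poses: list of 4x4 pose matrices.
    Returns (pose, value) pairs using the temporally nearest pose."""
    pose_times = np.asarray(pose_times)
    reading_times = np.asarray(reading_times)
    idx = np.clip(np.searchsorted(pose_times, reading_times),
                  1, len(pose_times) - 1)
    # Pick whichever neighbor (idx - 1 or idx) is closer in time.
    left_closer = (reading_times - pose_times[idx - 1]) < (pose_times[idx] - reading_times)
    idx = np.where(left_closer, idx - 1, idx)
    return [(poses[i], v) for i, v in zip(idx, values)]
```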
  • The illustrated representation is merely one manner in which 3D spherical image system embodiments may be deployed. For example, deployment assemblies may include: a motorized assembly for carrying the system by an autonomous robotic device, a motorized assembly for carrying the system by a semi-autonomous robotic device, and a motorized assembly for carrying the system by an operator guided robotic device. Furthermore, motorized assemblies may be land based, air based, or water based without limitation and without departing from embodiments provided herein.
  • Human Operator
  • FIG. 3 is an illustrative side view representation of a 3D spherical image system on a backpack assembly 300 carried by operator 302 in accordance with embodiments of the present invention. As noted above, traditional solutions to capturing spherical images include a number of cameras positioned in a ball configuration. While traditional solutions may provide a convenient structural platform for an operator to use, oftentimes the operator occupies a large portion of the captured and rendered image. In some applications, the presence of an operator in the image is not critical. However, in other applications, the presence of an operator may be undesirable or distracting. As illustrated, cameras may be placed to minimize or eliminate the operator from the imagery captured to render the 3D virtual model. As such, frontward facing camera 304 of the backpack assembly may be positioned at approximately a height of operator head 314 and may extend beyond operator head 314. The position of frontward facing camera embodiments may be achieved through various lengths of extension arms, a sliding assembly, or a temporary attachment to the human operator without limitation. In addition, upward facing camera 306 of the backpack assembly may be positioned at approximately the height of operator head 314 and may extend above operator head 314. Furthermore, rightward facing camera 308 of the backpack assembly may be positioned at approximately the height of operator right shoulder 318 and may extend beyond operator right shoulder 318. In like manner, a leftward facing camera of the backpack assembly (not shown) may be positioned at approximately a height of the operator's left shoulder and may extend beyond the operator's left shoulder. Furthermore, it may be appreciated that, in embodiments, the cameras may each be positioned with an unobstructed field of view. In this manner, resulting 3D spherical images may be rendered with minimal or no appearance of an operator, of capture devices, or of associated hardware. As such, the cameras are positioned immediately proximate to the operator to minimize the appearance of the operator in the 3D virtual model and to maximize the real environment being captured.
  • It may be appreciated that in motorized configurations as noted above, the position of the various cameras may be adjusted to minimize or eliminate the appearance of the vehicle or device by which 3D spherical image systems are deployed. However, the orientation of the various cameras and lasers may be preserved to ensure a seamless and complete 3D virtual model. With the backpack worn upright, x is forward, y is leftward, and z is upward. In operation, a yaw scanner scans the x-y plane, a pitch scanner scans the x-z plane, and a left vertical geometry scanner scans the y-z plane. Thus, the yaw scanner can resolve yaw rotations about the z axis, and the pitch scanner can resolve pitch rotations about the y axis.
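  • To make the stated conventions concrete, the following sketch (not from the disclosure) writes out the rotation matrices observable by each planar scanner in the x-forward, y-leftward, z-upward body frame; the composition order and sign conventions are assumptions.

```python
# Hypothetical sketch of the body-frame rotation conventions described above.
import numpy as np

def rot_yaw(yaw):
    """Rotation about z (up): the component the x-y plane (yaw) scanner resolves."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def rot_pitch(pitch):
    """Rotation about y (left): the component the x-z plane (pitch) scanner resolves."""
    c, s = np.cos(pitch), np.sin(pitch)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def body_to_world(yaw, pitch):
    """One common composition order (an assumption, not fixed by the text)."""
    return rot_yaw(yaw) @ rot_pitch(pitch)
```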
  • Assuming that the yaw scanner scans the same plane over time, scan matching may be applied to successive laser scans from the yaw scanner, and the translations and rotations obtained from scan matching may be integrated to recover the x, y, and yaw of the backpack over time. Likewise, assuming that the pitch scanner scans the same plane over time, scan matching may be applied to successive laser scans from the pitch scanner to recover x, z, and pitch. The assumption of scanning the same plane roughly holds for both the yaw and the pitch scanners. However, empirical data indicates that the coplanarity assumption remains more valid if the effective range of the yaw scanner is limited. In particular, scanned points that are closer to the yaw scanner appear to come from approximately the same plane between two successive scan times. However, points farther away from the yaw scanner can potentially come from two very different planes between two successive scan times, for example if between these times the backpack experiences a large pitch change. Scan points that clearly come from different planes between two scan times cannot be aligned by scan matching. Thus, it may be desirable to discard points farther than a certain threshold away from the yaw scanner. With the scanner range limited to approximately 15 meters, nearly all of the yaw scanner's range data between two successive scan times appears to come roughly from the same plane.
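  • A minimal sketch of the dead reckoning described above appears below, assuming a generic 2D scan matcher (for example, an ICP routine, which is not shown): points beyond the approximately 15 meter gate are discarded, and the per-step translations and rotations are composed to recover x, y, and yaw over time. All names here are illustrative.

```python
# Hypothetical sketch: range-gated scan matching integrated into a 2D trajectory.
import numpy as np

MAX_RANGE = 15.0  # per the text, farther points often violate coplanarity

def gate(points):
    """Keep only 2D scan points within MAX_RANGE of the yaw scanner origin."""
    return points[np.linalg.norm(points, axis=1) <= MAX_RANGE]

def se2(dx, dy, dyaw):
    """Homogeneous 2D rigid transform for one scan-matching increment."""
    c, s = np.cos(dyaw), np.sin(dyaw)
    return np.array([[c, -s, dx],
                     [s,  c, dy],
                     [0.0, 0.0, 1.0]])

def integrate(scans, match):
    """Compose per-step (dx, dy, dyaw) estimates into x, y, yaw over time.
    `match` is any 2D scan matcher; its implementation is outside this sketch."""
    pose = np.eye(3)
    trajectory = [pose]
    for prev, curr in zip(scans, scans[1:]):
        dx, dy, dyaw = match(gate(prev), gate(curr))
        pose = pose @ se2(dx, dy, dyaw)
        trajectory.append(pose)
    return trajectory
```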
  • Laser/IMU based localization algorithms may be utilized to estimate the transformation between backpack poses at consecutive time steps. These transformations may be composed to reconstruct the entire trajectory the backpack traverses. However, since each transformation is somewhat erroneous, the error in the computed trajectory can grow large over time. Therefore, an automatic loop closure detection method based on images collected by the backpack may be applied. Once loops are detected, loop closure may be enforced using a nonlinear optimization technique such as a Tree-based netwORk Optimizer (TORO), or any other optimization or bundle adjustment framework, to reduce the overall localization error. Using the pose information provided by the localization algorithms, all captured laser scans may be transformed into a single 3D coordinate frame. Since camera images are acquired at nearly the same time as a subset of the laser scans, nearest-neighbor interpolation of the pose parameters allows the pose of every camera image to be estimated. Therefore, to generate a 3D model, a) all laser scans may be transformed from the floor scanner into a single world coordinate frame and known methods may be utilized to create a triangulated surface model from the ordered laser data, and b) the model may be texture mapped by projecting laser scans onto temporally close images. However, laser based localization algorithms alone may not be accurate enough for building textured 3D surface models. Thus, an image based approach to refine the laser/IMU localization results may be utilized to address this inaccuracy.
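  • The following deliberately naive sketch illustrates why loop closure helps: odometry composed around a loop accumulates a residual, and distributing a small correction over every step (a crude stand-in for a pose-graph optimizer such as TORO) pulls the trajectory back toward consistency. Everything here is an illustrative assumption, not the optimizer actually referenced.

```python
# Hypothetical sketch: naive 2D loop closure by spreading the accumulated
# residual over all odometry steps. Real systems solve a pose-graph
# optimization (e.g., TORO) instead; this is only a didactic approximation.
import numpy as np

def se2(dx, dy, dth):
    c, s = np.cos(dth), np.sin(dth)
    return np.array([[c, -s, dx], [s, c, dy], [0.0, 0.0, 1.0]])

def params(T):
    """Extract (x, y, theta) from a homogeneous 2D transform."""
    return T[0, 2], T[1, 2], np.arctan2(T[1, 0], T[0, 0])

def close_loop(increments, loop_constraint):
    """increments: per-step SE(2) odometry around a detected loop.
    loop_constraint: measured end-relative-to-start pose (identity when the
    operator returns exactly to the starting pose)."""
    composed = np.eye(3)
    for T in increments:
        composed = composed @ T
    # Residual accumulated by odometry in excess of the loop constraint.
    ex, ey, eth = params(np.linalg.inv(loop_constraint) @ composed)
    n = len(increments)
    fix = se2(-ex / n, -ey / n, -eth / n)
    # Appending the same small correction to every step only approximately
    # cancels the residual, since SE(2) composition does not commute; that is
    # precisely why a nonlinear pose-graph optimizer is used in practice.
    return [T @ fix for T in increments]
```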
  • The terms “certain embodiments”, “an embodiment”, “embodiment”, “embodiments”, “the embodiment”, “the embodiments”, “one or more embodiments”, “some embodiments”, and “one embodiment” mean one or more (but not all) embodiments unless expressly specified otherwise. The terms “including”, “comprising”, “having” and variations thereof mean “including but not limited to”, unless expressly specified otherwise. The enumerated listing of items does not imply that any or all of the items are mutually exclusive, unless expressly specified otherwise. The terms “a”, “an” and “the” mean “one or more”, unless expressly specified otherwise.
  • While this invention has been described in terms of several embodiments, there are alterations, permutations, and equivalents, which fall within the scope of this invention. It should also be noted that there are many alternative ways of implementing the methods, computer program products, and apparatuses of the present invention. Furthermore, unless explicitly stated, any method embodiments described herein are not constrained to a particular order or sequence. Further, the Abstract is provided herein for convenience and should not be employed to construe or limit the overall invention, which is expressed in the claims. It is therefore intended that the following appended claims be interpreted as including all such alterations, permutations, and equivalents as fall within the true spirit and scope of the present invention.

Claims (20)

What is claimed is:
1. A system for producing 3D spherical imagery for a virtual walkthrough corresponding with a real environment, the system comprising:
a rightward facing camera for capturing real-time rightward facing image data, the rightward facing camera electronically coupled with a computing system;
a leftward facing camera for capturing real-time leftward facing image data;
a backward facing camera for capturing real-time backward facing image data;
a frontward facing camera for capturing real-time frontward facing image data;
an upward facing camera for capturing real-time upward facing image data;
a plurality of laser scanners; and
an inertial movement unit (IMU), wherein
data from the plurality of laser scanners and the IMU captures 3D geometry information of the real environment and is rendered to provide a 3D virtual model of the real environment, and wherein
image data from the cameras is rendered to provide image texture blending of the 3D virtual model.
2. The system of claim 1, wherein the plurality of laser scanners comprises:
a first laser scanner for scanning a horizontal plane;
a second laser scanner for scanning a first vertical plane normal to a direction of motion;
a third laser scanner for scanning a second vertical plane normal to the direction of motion; and
a fourth laser scanner for scanning a third vertical plane tangent to the direction of motion.
3. The system of claim 1, wherein
the cameras and the plurality of laser scanners are each positioned with an unobstructed field of view.
4. The system of claim 1, further comprising a data collection source selected from the group consisting of:
a spectrometer for capturing light source data,
a barometer for capturing atmospheric pressure data,
a magnetometer for capturing magnetic field data,
a thermometer for capturing temperature data,
a wireless local area network (WLAN) packet capture device for capturing WLAN data,
a CO2 meter for capturing carbon dioxide data, and
a lux meter for capturing illuminance data.
5. The system of claim 1, further comprising an assembly selected from the group consisting of: a backpack assembly for carrying the system by an operator, a motorized assembly for carrying the system by an autonomous robotic device, a motorized assembly for carrying the system by a semi-autonomous robotic device, and a motorized assembly for carrying the system by an operator guided robotic device.
6. The system of claim 5, wherein
the frontward facing camera of the backpack assembly is positioned at approximately a height of the operator's head and extends beyond the operator's head, wherein
the upward facing camera of the backpack assembly is positioned at approximately the height of the operator's head and extends above the operator's head, wherein
the rightward facing camera of the backpack assembly is positioned at approximately a height of the operator's right shoulder and extends beyond the operator's right shoulder, and wherein
the leftward facing camera of the backpack assembly is positioned at approximately a height of the operator's left shoulder and extends beyond the operator's left shoulder, wherein the cameras are positioned immediately proximate to the operator to minimize the appearance of the operator in the 3D virtual model and to maximize the real environment being captured.
7. The system of claim 1, wherein the cameras capture image data at a frame rate of approximately one frame every 4 seconds to 30 frames per second.
8. The system of claim 1, wherein the cameras are positioned to have an overlapping field of view (FOV).
9. The system of claim 8, wherein the cameras have an FOV of approximately 90 to 360°.
10. The system of claim 1, wherein image data from the cameras is stitched together to provide a seamless image texture.
11. The system of claim 10, wherein the seamless image texture results in a seamless four pi steradian spherical image texture.
12. The system of claim 10, wherein the seamless image texture results in a seamless cubic image texture.
13. The system of claim 1, wherein the IMU samples at a rate in a range of approximately 150 to 300 Hz.
14. The system of claim 1, wherein the plurality of laser scanners sample at a rate in a range of approximately 5 to 60 Hz.
15. A backpack assembly for carrying a system by an operator for producing 3D spherical imagery for a virtual walkthrough corresponding with a real environment, the backpack assembly comprising:
a rightward facing camera for capturing real-time rightward facing image data, the rightward facing camera electronically coupled with a computing system, wherein the rightward facing camera of the backpack assembly is positioned at approximately a height of the operator's right shoulder and extends beyond the operator's right shoulder;
a leftward facing camera for capturing real-time leftward facing image data, wherein the leftward facing camera of the backpack assembly is positioned at approximately a height of the operator's left shoulder and extends beyond the operator's left shoulder;
a backward facing camera for capturing real-time backward facing image data;
a frontward facing camera for capturing real-time frontward facing image data, wherein the frontward facing camera of the backpack assembly is positioned at approximately a height of the operator's head and extends beyond the operator's head;
an upward facing camera for capturing real-time upward facing image data, wherein the upward facing camera of the backpack assembly is positioned at approximately the height of the operator's head and extends above the operator's head, and wherein the cameras are positioned immediately proximate to the operator to minimize the appearance of the operator in the 3D virtual model and to maximize the real environment being captured;
a plurality of laser scanners; and
an inertial movement unit (IMU), wherein
data from the plurality of laser scanners and the IMU captures 3D geometry information of the real environment and is rendered to provide a 3D virtual model of the real environment, and wherein
image data from the cameras is rendered to provide image texture blending of the 3D virtual model.
16. The backpack assembly of claim 15, wherein the plurality of laser scanners comprises:
a first laser scanner for scanning a horizontal plane;
a second laser scanner for scanning a first vertical plane normal to a direction of motion;
a third laser scanner for scanning a second vertical plane normal to the direction of motion; and
a fourth laser scanner for scanning a third vertical plane tangent to the direction of motion.
17. The backpack assembly of claim 15, wherein
the cameras and the plurality of laser scanners are each positioned with an unobstructed field of view (FOV).
18. The backpack assembly of claim 15, wherein the cameras are positioned to have an overlapping FOV.
19. The backpack assembly of claim 15, wherein image data from the cameras is stitched together to provide a seamless image texture.
20. The backpack assembly of claim 19, wherein the seamless image texture results in a seamless four pi steradian spherical image texture.
US14/855,742 2015-09-16 2015-09-16 3d spherical image system Abandoned US20170078593A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US14/855,742 US20170078593A1 (en) 2015-09-16 2015-09-16 3d spherical image system
US14/947,869 US10127718B2 (en) 2015-09-16 2015-11-20 Methods for indoor 3D surface reconstruction and 2D floor plan recovery utilizing segmentation of building and object elements

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US14/855,742 US20170078593A1 (en) 2015-09-16 2015-09-16 3d spherical image system

Publications (1)

Publication Number Publication Date
US20170078593A1 (en) 2017-03-16

Family

ID=58257741

Family Applications (2)

Application Number Title Priority Date Filing Date
US14/855,742 Abandoned US20170078593A1 (en) 2015-09-16 2015-09-16 3d spherical image system
US14/947,869 Active 2036-12-08 US10127718B2 (en) 2015-09-16 2015-11-20 Methods for indoor 3D surface reconstruction and 2D floor plan recovery utilizing segmentation of building and object elements

Family Applications After (1)

Application Number Title Priority Date Filing Date
US14/947,869 Active 2036-12-08 US10127718B2 (en) 2015-09-16 2015-11-20 Methods for indoor 3D surface reconstruction and 2D floor plan recovery utilizing segmentation of building and object elements

Country Status (1)

Country Link
US (2) US20170078593A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110322669A (en) * 2018-03-31 2019-10-11 汉唐传媒股份有限公司 An early-warning method based on a reality-projection sports field

Families Citing this family (39)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6746790B2 (en) * 2016-10-12 2020-08-26 Hewlett-Packard Development Company, L.P. Sub-volume octree
US10255720B1 (en) * 2016-10-13 2019-04-09 Bentley Systems, Incorporated Hybrid mesh from 2.5D and 3D point data
US10530997B2 (en) 2017-07-13 2020-01-07 Zillow Group, Inc. Connecting and using building interior data acquired from mobile devices
US10375306B2 (en) 2017-07-13 2019-08-06 Zillow Group, Inc. Capture and use of building interior data from mobile devices
CN109493380B (en) * 2017-09-11 2022-05-17 重庆大学 Method for calculating area of irregular shear surface in rock joint surface shear test
CN108303037B (en) * 2018-01-31 2020-05-08 广东工业大学 Method and device for detecting workpiece surface shape difference based on point cloud analysis
US10643386B2 (en) 2018-04-11 2020-05-05 Zillow Group, Inc. Presenting image transition sequences between viewing locations
EP3620941A1 (en) 2018-09-05 2020-03-11 3Frog Nv Generating a spatial model of an indoor structure
CA3058602C (en) 2018-10-11 2023-01-24 Zillow Group, Inc. Automated mapping information generation from inter-connected images
US10708507B1 (en) 2018-10-11 2020-07-07 Zillow Group, Inc. Automated control of image acquisition via use of acquisition device sensors
US10809066B2 (en) 2018-10-11 2020-10-20 Zillow Group, Inc. Automated mapping information generation from inter-connected images
CN109754459B (en) * 2018-12-18 2021-04-27 湖南视觉伟业智能科技有限公司 Method and system for constructing human body three-dimensional model
US10891769B2 (en) * 2019-02-14 2021-01-12 Faro Technologies, Inc System and method of scanning two dimensional floorplans using multiple scanners concurrently
US11281557B2 (en) 2019-03-18 2022-03-22 Microsoft Technology Licensing, Llc Estimating treatment effect of user interface changes using a state-space model
CN109949326B (en) * 2019-03-21 2020-09-08 苏州工业园区测绘地理信息有限公司 Building contour line extraction method based on knapsack type three-dimensional laser point cloud data
US11074749B2 (en) * 2019-04-26 2021-07-27 Microsoft Technology Licensing, Llc Planar surface detection
US11176374B2 (en) 2019-05-01 2021-11-16 Microsoft Technology Licensing, Llc Deriving information from images
US11302081B2 (en) 2019-05-21 2022-04-12 Magic Leap, Inc. Caching and updating of dense 3D reconstruction data
US11238659B2 (en) * 2019-06-26 2022-02-01 Magic Leap, Inc. Caching and updating of dense 3D reconstruction data
US11367264B2 (en) * 2019-07-17 2022-06-21 The Regents Of The University Of California Semantic interior mapology: a tool box for indoor scene description from architectural floor plans
US11243656B2 (en) 2019-08-28 2022-02-08 Zillow, Inc. Automated tools for generating mapping information for buildings
US11164368B2 (en) 2019-10-07 2021-11-02 Zillow, Inc. Providing simulated lighting information for three-dimensional building models
US11164361B2 (en) 2019-10-28 2021-11-02 Zillow, Inc. Generating floor maps for buildings from automated analysis of visual data of the buildings' interiors
US11676344B2 (en) 2019-11-12 2023-06-13 MFTB Holdco, Inc. Presenting building information using building models
US10825247B1 (en) 2019-11-12 2020-11-03 Zillow Group, Inc. Presenting integrated building information using three-dimensional building models
US11405549B2 (en) 2020-06-05 2022-08-02 Zillow, Inc. Automated generation on mobile devices of panorama images for building locations and subsequent use
WO2022016407A1 (en) * 2020-07-22 2022-01-27 Intel Corporation Multi-plane mapping for indoor scene reconstruction
US11514674B2 (en) 2020-09-04 2022-11-29 Zillow, Inc. Automated analysis of image contents to determine the acquisition location of the image
US11592969B2 (en) 2020-10-13 2023-02-28 MFTB Holdco, Inc. Automated tools for generating building mapping information
US11481925B1 (en) 2020-11-23 2022-10-25 Zillow, Inc. Automated determination of image acquisition locations in building interiors using determined room shapes
CA3142154A1 (en) 2021-01-08 2022-07-08 Zillow, Inc. Automated determination of image acquisition locations in building interiors using multiple data capture devices
US11252329B1 (en) 2021-01-08 2022-02-15 Zillow, Inc. Automated determination of image acquisition locations in building interiors using multiple data capture devices
US11790648B2 (en) 2021-02-25 2023-10-17 MFTB Holdco, Inc. Automated usability assessment of buildings using visual data of captured in-room images
US11836973B2 (en) 2021-02-25 2023-12-05 MFTB Holdco, Inc. Automated direction of capturing in-room information for use in usability assessment of buildings
CN113689444A (en) * 2021-07-07 2021-11-23 北京道达天际科技有限公司 Building point cloud monomer segmentation method and device
US11501492B1 (en) 2021-07-27 2022-11-15 Zillow, Inc. Automated room shape determination using visual data of multiple captured in-room images
US11842464B2 (en) 2021-09-22 2023-12-12 MFTB Holdco, Inc. Automated exchange and use of attribute information between building images of multiple types
WO2023208340A1 (en) * 2022-04-27 2023-11-02 Telefonaktiebolaget Lm Ericsson (Publ) Method for determining a view of a 3d point cloud
US11830135B1 (en) 2022-07-13 2023-11-28 MFTB Holdco, Inc. Automated building identification using floor plans and acquired building images

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5130794A (en) * 1990-03-29 1992-07-14 Ritchey Kurtis J Panoramic display system
US20020046218A1 (en) * 1999-06-23 2002-04-18 Scott Gilbert System for digitally capturing and recording panoramic movies
US20020063711A1 (en) * 1999-05-12 2002-05-30 Imove Inc. Camera system with high resolution image inside a wide angle view
US20030202089A1 (en) * 2002-02-21 2003-10-30 Yodea System and a method of three-dimensional modeling and restitution of an object
US20130141418A1 (en) * 2011-12-01 2013-06-06 Avaya Inc. Methods, apparatuses, and computer-readable media for providing at least one availability metaphor of at least one real world entity in a virtual world
US20130194305A1 (en) * 2010-08-30 2013-08-01 Asukalab Inc. Mixed reality display system, image providing server, display device and display program
US20130250047A1 (en) * 2009-05-02 2013-09-26 Steven J. Hollinger Throwable camera and network for operating the same
US20130307842A1 (en) * 2012-05-15 2013-11-21 Imagine Mobile Augmented Reality Ltd System worn by a moving user for fully augmenting reality by anchoring virtual objects
US20140267596A1 (en) * 2013-03-14 2014-09-18 Joergen Geerds Camera system
US20150138311A1 (en) * 2013-11-21 2015-05-21 Panavision International, L.P. 360-degree panoramic camera systems
US20150348580A1 (en) * 2014-05-29 2015-12-03 Jaunt Inc. Camera array including camera modules
US20160140930A1 (en) * 2014-11-13 2016-05-19 WorldViz LLC Methods and systems for virtual and augmented reality
US20160261829A1 (en) * 2014-11-07 2016-09-08 SeeScan, Inc. Inspection camera devices and methods with selectively illuminated multisensor imaging
US20160300392A1 (en) * 2015-04-10 2016-10-13 VR Global, Inc. Systems, media, and methods for providing improved virtual reality tours and associated analytics
US20170094278A1 (en) * 2014-03-17 2017-03-30 Sony Computer Entertainment Europe Limited Image processing
US20170227841A1 (en) * 2014-10-07 2017-08-10 Nokia Technologies Oy Camera devices with a large field of view for stereo imaging

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6396492B1 (en) * 1999-08-06 2002-05-28 Mitsubishi Electric Research Laboratories, Inc Detail-directed hierarchical distance fields
US6990228B1 (en) * 1999-12-17 2006-01-24 Canon Kabushiki Kaisha Image processing apparatus
US8081180B2 (en) * 2006-11-17 2011-12-20 University Of Washington Function-based representation of N-dimensional structures
US8401264B2 (en) * 2005-12-08 2013-03-19 University Of Washington Solid modeling based on volumetric scans
US7940279B2 (en) * 2007-03-27 2011-05-10 Utah State University System and method for rendering of texel imagery
US20140125667A1 (en) * 2009-11-11 2014-05-08 Google Inc. Roof Generation And Texturing Of 3D Models
US20150187130A1 (en) * 2011-02-10 2015-07-02 Google Inc. Automatic Generation of 2.5D Extruded Polygons from Full 3D Models
US9247880B2 (en) * 2012-02-23 2016-02-02 Siemens Aktiengesellschaft Image fusion for interventional guidance


Also Published As

Publication number Publication date
US10127718B2 (en) 2018-11-13
US20170148211A1 (en) 2017-05-25

Similar Documents

Publication Publication Date Title
US20170078593A1 (en) 3d spherical image system
US10495461B2 (en) Surveying system
CN108369743B (en) Mapping a space using a multi-directional camera
WO2017114508A1 (en) Method and device for three-dimensional reconstruction-based interactive calibration in three-dimensional surveillance system
Shariq et al. Revolutionising building inspection techniques to meet large-scale energy demands: A review of the state-of-the-art
US20180096525A1 (en) Method for generating an ordered point cloud using mobile scanning data
JP2018522345A5 (en)
US10929575B2 (en) Modelling system and method
Liu et al. LSFB: A low-cost and scalable framework for building large-scale localization benchmark
Reich et al. Filling the Holes: potential of UAV-based photogrammetric façade modelling
Reich et al. On-line compatible orientation of a micro-uav based on image triplets
Chen et al. The power of indoor crowd: Indoor 3D maps from the crowd
Wei et al. A Compact Handheld Sensor Package with Sensor Fusion for Comprehensive and Robust 3D Mapping
Moemen Multi-Sensor 3D Model Reconstruction in Unknown Environments
Esser et al. Field Robot for High-Throughput and High-Resolution 3D Plant Phenotyping: Towards Efficient and Sustainable Crop Production
Dona Localization of RGB-D Sensors for Robotic and AR Applications
Kelly et al. Landmark integration using GIS and image processing for environmental analysis with outdoor mobile robots
Schindler et al. Real-Time Camera Guidance for 3d Scene Reconstruction
Grehl et al. Mine Planning & Equipment Selection 2015

Legal Events

Date Code Title Description
AS Assignment

Owner name: INDOOR REALITY, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ZAKHOR, AVIDEH;TURNER, ERIC LEE;REEL/FRAME:036772/0888

Effective date: 20151012

AS Assignment

Owner name: INDOOR REALITY INC., CALIFORNIA

Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE ASSIGNEE NAME PREVIOUSLY RECORDED AT REEL: 036772 FRAME: 0888. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT;ASSIGNORS:ZAKHOR, AVIDEH;TURNER, ERIC LEE;REEL/FRAME:039850/0993

Effective date: 20151012

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: HILTI AG, LIECHTENSTEIN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:INDOOR REALITY, INC.;REEL/FRAME:049542/0552

Effective date: 20190424