EP3210165A1 - Photogrammetric methods and devices related thereto - Google Patents

Photogrammetric methods and devices related thereto

Info

Publication number
EP3210165A1
Authority
EP
European Patent Office
Prior art keywords
interest
image
images
capture device
dimensions
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP15852327.4A
Other languages
German (de)
French (fr)
Other versions
EP3210165A4 (en)
Inventor
Habib FATHI
Daniel CIPRARI
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Pointivo Inc
Original Assignee
Pointivo Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Priority claimed from US14/826,113 external-priority patent/US20160239976A1/en
Application filed by Pointivo Inc filed Critical Pointivo Inc
Priority claimed from PCT/US2015/056752 external-priority patent/WO2016065063A1/en
Publication of EP3210165A1 publication Critical patent/EP3210165A1/en
Publication of EP3210165A4 publication Critical patent/EP3210165A4/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/579Depth or shape recovery from multiple images from motion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30241Trajectory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30244Camera pose
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes

Definitions

  • the inventions herein relate generally to improvements in photogrammetry and devices suitable for obtaining and utilizing such improvements.
  • Photogrammetry is the science of obtaining measurements from photographs, especially for recovering the exact or nearly-exact positions of surface points. While photogrammetry is emerging as a robust, non-contact technique to obtain measurements of objects, scenes, landscapes, etc., there are limitations to existing methods, some of which, for example, are set forth in the following few paragraphs.
  • Accurate three-dimensional (3D) digital representations of objects can be obtained using methods that utilize active-sensing techniques, such as systems that emit structured light, laser beams or the like, record images of objects illuminated by the emitted light, and then determine the 3D measurements from the recorded images.
  • a laser scanner is an example of a standalone device that utilizes structured light to generate measurements of objects.
  • emission of the structured light used for 2D and 3D image generation can be achieved by including a separate hardware device as a peripheral. This peripheral is configured to emit, for example, structured light to generate a point cloud (or depth map) from which data about the object of interest can be derived using photogrammetric algorithms.
  • Peripheral devices that provide active sensing methods include, for example, the Structure Sensor (see the internet URL structure.io) and the DPI-8 kit or DPI-8SR kit products (see the internet URL www.dotproduct3d.com). While often providing accurate image data, it is nonetheless cumbersome for users to have to add a clamp-on or other type of peripheral equipment to their mobile devices.
  • active sensing means can be integrated into mobile devices, such as in Google's Tango® product.
  • Such stereo images generally have insufficient parallax for high-quality measurement when used to obtain data regarding distant objects (e.g., objects more than a few (about one to about five) meters away from the cameras).
  • the user will be directed to use a template or framework incorporated in, for example, software associated with the image-capture device to guide orientation of the image-capture device relative to the object of interest.
  • This technique can ensure that a sufficient number of appropriately overlapping images of the object of interest are obtained.
  • the user can be provided with general instructions of how to orient the camera and/or object so as to obtain appropriate overlap. Both of these techniques for guiding the user can be used to provide accurate visualization of the object of interest but are nonetheless cumbersome and prone to user error.
  • the three-dimensional points from an object of interest can be estimated from measurements from two or more photographic images taken from different positions. Corresponding points are identified on each image.
  • a line of sight (or ray) can be constructed from the camera location to the point on the object. Triangulation allows determination of the 3D location of the point both in relation to the object's orientation in space, as well as with regard to that point's orientation and/or position in relation to other points.
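  • The triangulation described above can be illustrated with a minimal two-view sketch (not taken from the patent): given known 3×4 projection matrices P1 and P2 for the two camera positions and one matched pixel pair, the 3D point is recovered as the least-squares intersection of the two rays.

```python
# A minimal sketch of two-view triangulation (DLT), assuming known
# 3x4 camera projection matrices P1, P2 and a matched pixel pair.
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Recover the 3D point whose rays from the two camera centers pass
    through pixel x1 in image 1 and pixel x2 in image 2."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous 3D point is the null vector of A (least squares via SVD).
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize
```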
  • An example of fairly accurate passive photogrammetry that utilizes multiple images generated from a single camera is provided by Photomodeler (photomodeler.com).
  • This software product allows a user to generate a 3D digital representation of an object of interest from multiple overlapping images, where the relevant detail is provided by the orientation of images in a known area of space.
  • accurate measurements can be obtained from the 3D digital representations of the object(s) of interest.
  • Photomodeler requires a user to conduct an explicit calibration, which occurs in a separate step, to achieve such accuracy. Once the 3D orientation is obtained, measurement and other detail information regarding the object of interest can be provided for use. At least part of this calibration step comprises users performing manual boundary identification.
  • This calibration process is time-consuming, currently requiring the user to capture, using a chessboard marker, a minimum number of images taken from different angles and distances with respect to the image-capture device, whereby more images will provide more accurate calibration.
  • accurate measurements of the object of interest require a larger calibration surface (e.g., about 6 ft. x about 6 ft. (about 1.82 meters by 1.82 meters)).
  • this physical calibration step provides the information necessary to orient the object(s) of interest in space so as to make it possible to provide 3D digital representations of the object(s) of interest thereof so that measurements can be obtained.
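  • For concreteness, the following is a sketch of the kind of separate, explicit chessboard-calibration step described above, using OpenCV; the board size and file names are illustrative assumptions, not taken from the patent or the Photomodeler product.

```python
# A sketch of explicit chessboard calibration with OpenCV; assumes at
# least one image named calib_*.jpg contains a visible 9x6 chessboard.
import glob
import cv2
import numpy as np

pattern = (9, 6)  # inner corners of the chessboard (illustrative)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calib_*.jpg"):  # images from varied angles/distances
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Recovers focal length, principal point and lens-distortion coefficients.
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
```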
  • the invention provides a method for generating 3D digital representations of an object of interest using an image-capture device.
  • An exemplary method comprises receiving a plurality of 2D digital images of a scene, where at least one object of interest is present in the scene.
  • the 2D digital images will at least partially overlap with regard to the object of interest.
  • at least some of the 2D digital overlapping images of the object are processed using methodology that incorporates a structure-from-motion algorithm.
  • representations obtained in accordance with the invention are suitable for generating one or more of a 3D model, a 3D point cloud, a 3D line cloud or a 3D edge cloud, wherein each, independently, corresponds to one or more dimensions in the object.
  • the 3D digital representation, and any data or other information obtainable therefrom is accurate in relation to the dimensions of the actual object, which allows substantially accurate measurements of one or more dimensions of the object to be obtained.
  • the invention provides a method of detecting boundaries in at least one object of interest in a scene.
  • overlapping 2D digital images of an object of interest in a scene are generated.
  • Boundary detection information regarding the object is generated from a process that incorporates a structure-from-motion algorithm.
  • the boundary detection information can be used to provide measurements, 3D digital representations, 3D point clouds, 3D line clouds, 3D edge clouds and the like.
  • the overlapping 2D digital images used in the present invention can be obtained from a single image-capture device.
  • the single image-capture device is a video camera.
  • the 2D digital images can be generated by an image-capture device that comprises a passive sensing technique.
  • the 2D digital images can be generated by an image-capture device that consists essentially of a passive sensing technique.
  • the image-capture devices can be integrated into a device such as a smartphone, tablet or wearable device, or the image-capture device can be a stand-alone camera device.
  • the image-capture device can also be incorporated in a specialized measurement device. Accordingly, the present invention relates to one or more devices that incorporate the methods herein.
  • the present invention relates to devices and methods for generating surveys of interior and exterior scenes or objects in the scenes using image capture devices associated with image processing techniques suitable to allow survey information to be returned quickly to a user and where such survey information can optionally be further processed.
  • Such surveys and their included survey information relating to interior and exterior scenes can be used in applications such as construction/remodeling estimation, 3D model generation, insurance policy underwriting and adjusting, interior and exterior design efforts, real estate marketing, inventory management and other areas where it can be desirable to obtain information about features and dimensions of one or more features or objects present in the scene.
  • FIG. 1A is a block diagram of a system 101 according to some embodiments of the present invention.
  • FIG. 1B is a flowchart of a method 102 illustrating an exemplary method to obtain 3D digital representations of an object of interest according to the methodology herein.
  • FIG. 2 is a flowchart of a method 201 illustrating an exemplary method to perform the structure-recovery portion 125 of the process of FIG. 1B.
  • FIG. 3 is a flowchart of a method 301 illustrating an exemplary methodology for use in navigation applications for robots and the like.
  • FIG. 4 is a flowchart of a method 401 illustrating an exemplary method to perform a simultaneous-localization-and-mapping (SLAM) portion 325 of method 301 of FIG. 3.
  • FIGS. 5 A, 5B and 5C are images that illustrate various steps in obtaining
  • FIGS. 6A, 6B and 6C are images that illustrate various steps in obtaining
  • the invention provides a method for generating 3D digital representations of an object of interest in a scene from an image-capture device.
  • An exemplary method comprises receiving a plurality of 2D digital images of the scene, where at least one object of interest is present in the scene.
  • the 2D digital images will at least partially overlap with regard to the object of interest.
  • at least some of the 2D digital overlapping images of the object are processed using methodology that incorporates a structure-from-motion algorithm.
  • the 3D digital representations obtained in accordance with the invention are suitable for generating one or more of a 3D model, a 3D point cloud, a 3D line cloud or a 3D edge cloud, wherein each, independently, corresponds to one or more dimensions in the object. Further, the 3D digital representations, and any data or other information obtainable therefrom, is accurate in relation to the dimensions of the actual object, which allows accurate measurements of one or more dimensions of the object to be obtained.
  • "overlapping images" means individual images that each, independently, include at least one object of interest, where such images overlap each other as far as one or more dimensions of the object of interest are concerned. "Overlapping" in relation to the invention herein is described in further detail hereinbelow.
  • an "object of interest” encompasses a wide variety of objects such as, for example, structures, parts of structures, landscapes, vehicles, people, animals and the like. Indeed, "object of interest” can be anything from which a 2D image can be obtained and that from which information suitable for generation of accurate 3D digital representations of such objects can be obtained according to the methodology herein.
  • the at least one object of interest can have multiple dimensions, such as linear or spatial dimensions, some or all of which may be of interest, such as to provide measurements or other useful information. Further, the methodology herein can be utilized to generate accurate 3D digital representations of more than one object of interest in a scene, such as a collection of smaller objects (e.g., doors, windows, etc.) associated with a larger object (e.g., the overall dimensions of a building), where such collection of smaller and larger objects is present in the plurality of overlapping 2D images of a scene.
  • the at least one object of interest can be a roof on a structure that is present in a scene that includes the structure, landscaping and other objects.
  • the length of the roof on a front side of the structure could be at least one dimension of interest.
  • each of the dimensions of the roof could comprise a plurality of dimensions of interest.
  • each of these one or more dimensions/features will have an actual measurement value that is obtainable when a physical measurement of the length, depth, etc., is conducted, such as by a linear measurement tool or an electronic distance measurement tool.
  • the overlapping 2D digital images used in the present invention can be obtained from a single image-capture device.
  • the single image-capture device is a video camera.
  • the 2D digital images can be generated by an image-capture device that comprises a passive sensing technique.
  • the 2D digital images can be generated by an image-capture device that consists essentially of a passive sensing technique.
  • the image-capture devices can be integrated into a device such as a smartphone, tablet or wearable device, or the image-capture device can be a stand-alone camera device.
  • the image-capture device can also be incorporated in a specialized measurement device. Accordingly, the present invention relates to one or more devices that incorporate the methods herein.
  • an extracted measurement value of the one or a plurality of dimensions in the object of interest and other useful information can be obtained from using a single passive image-capture device, such as that integrated into a smartphone, tablet, wearable device, digital camera (for example, digital cameras on drones) or the like.
  • video means generally that the images are taken, for example, as single frames in quick succession for playback to provide the illusion of motion to a viewer.
  • video suitable for use in the present invention comprises at least about 24 frames per second ("fps"), or at least about 28 fps, or at least about 30 fps, or any suitable fps as appropriate in a specific context.
  • image-capture-device calibration is the process of determining internal image-capture-device parameters (e.g., focal length, skew, principal point, and lens distortion) from a plurality of images taken of an object with known dimensions (e.g., a planar surface with a chessboard pattern).
  • Image-capture-device calibration is used for relating image-capture-device measurements with measurements in the real "3D" world. Objects in the real world are not only three-dimensional, they also occupy physical space measured in physical units.
  • the relation between the image-capture device's natural units (pixels) and the units of the physical world (e.g., meters) can be a significant component in any attempt to reconstruct a 3D scene and/or an object incorporated therein.
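  • These internal parameters are conventionally collected into a 3×3 intrinsic matrix that maps camera-frame coordinates to pixel coordinates; the following sketch uses illustrative values, not parameters from the patent.

```python
# A sketch of the intrinsic matrix K relating camera units to pixels.
import numpy as np

# Illustrative values: fx, fy are focal lengths in pixel units (focal
# length in mm divided by pixel pitch in mm), (cx, cy) is the principal
# point, and s is the skew.
fx, fy, cx, cy, s = 3000.0, 3000.0, 2000.0, 1500.0, 0.0
K = np.array([[fx, s,  cx],
              [0., fy, cy],
              [0., 0., 1.]])

# A 3D point in camera coordinates (meters) projects to pixels via K.
X = np.array([0.2, -0.1, 3.0])
u, v, w = K @ X
print(u / w, v / w)  # pixel coordinates
```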
  • a “calibrated image-capture device” is an image- capture device that has undergone a calibration process.
  • an "uncalibrated image-capture device" is an image-capture device that has not been put through a calibration process, in that no information, or substantially no information, regarding the internal image-capture-device parameters is provided, and substantially the only available information about the images is presented in the image/video frame itself.
  • the present invention incorporates a calibrated image-capture device.
  • the present invention incorporates an uncalibrated image-capture device.
  • the present invention extracts metadata (such as EXIF tags) that includes camera-lens data, focal length data, time data, and/or GPS data, and uses that additional data to further process the images into point- and edge-cloud data.
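  • As an illustration of such metadata extraction (the patent does not specify a library), EXIF tags can be read with Pillow; tag availability varies by device and file format, and the file name below is a placeholder.

```python
# A sketch of pulling camera metadata from EXIF tags with Pillow.
from PIL import Image
from PIL.ExifTags import TAGS

exif = Image.open("frame.jpg").getexif()
meta = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
focal_length = meta.get("FocalLength")  # may be absent for non-JPEG input
timestamp = meta.get("DateTime")
print(focal_length, timestamp)
```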
  • use of a plurality of 2D overlapping images derived from video greatly improves the ease and quality of user capture of the plurality of 2D images that can be processed to provide accurate 3D digital representations of the at least one object of interest, for example, such as to generate substantially accurate measurements of the object.
  • the sequential nature of video has been found by the inventors herein to improve 3D digital representation quality due to an attendant reduction in the errors associated with a user needing to obtain proper overlap of the plurality of overlapping 2D images so that detailed information about the object of interest can be derived.
  • Another advantage of the present invention is the shortened time needed to obtain the overlapping 2D images used in the present invention to create detailed information about the object of interest such that an accurate 3D digital representation can be obtained for use.
  • use of video as the source of the plurality of overlapping 2D images can allow tracking of points that are inside the images of the object of interest (i.e., tracking points within the boundaries of the images) or outside of them (i.e., continuing to "follow" points first seen in the image frame by tracking their estimated positions through intermediate frames in which they have moved outside the image boundaries, so that when those points re-enter the field of view of later image frames they can be substantially correlated to the same features in the earlier image frames). Such point tracking provides improvements in the 2D-image data used to generate the 3D digital representations of the at least one object of interest in a scene.
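  • As a sketch of this kind of frame-to-frame point tracking (the patent does not prescribe a particular tracker), pyramidal Lucas-Kanade optical flow can follow feature points through a video; the file name and parameters are illustrative.

```python
# A sketch of frame-to-frame point tracking with OpenCV's pyramidal
# Lucas-Kanade tracker over a walkaround video.
import cv2

cap = cv2.VideoCapture("walkaround.mp4")
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                              qualityLevel=0.01, minDistance=7)
while True:
    ok, frame = cap.read()
    if not ok or pts is None or len(pts) == 0:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    nxt, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    pts = nxt[status.flatten() == 1].reshape(-1, 1, 2)  # keep tracked points
    prev_gray = gray
```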
  • While the present invention is particularly suitable for use with image-capture devices that generate a video from which overlapping 2D images can be provided, the present invention is not limited to the use of video. That is, the plurality of overlapping 2D images can suitably be provided by an image-capture device that provides 2D still images, such as a "point and shoot" digital camera.
  • the at least two overlapping images can be obtained from images that comprise a suitable parallax between and amongst the images to allow generation of information from which accurate 3D digital representations of the object(s) can be obtained.
  • a plurality of still 2D images taken in sequence can also be defined as "video" if played back at a speed that allows the perception of motion. Therefore, in some aspects, the plurality of overlapping 2D images can be derived from a plurality of digital still images and/or from video without affecting the substance of the present invention, as long as the plurality of overlapping 2D images that include an object of interest can be suitably processed to generate detailed information from which the accurate 3D digital representations of the object(s) of interest can be generated.
  • the overlapping 2D images of a scene will include at least a portion of the at least one object of interest.
  • at least a portion of the overlapping 2D images of the scene will also be overlapping with regard to the at least one object of interest.
  • the plurality of overlapping 2D images includes at least two (2) suitably overlapping 2D images, where the overlap is in relation to the at least one object of interest.
  • the plurality of overlapping 2D images includes at least 5, at least 10, or at least 15 or at least 20 suitably overlapping 2D images, where the overlap is in relation to the at least one object of interest.
  • the number of overlapping 2D images needed to generate an accurate 3D digital representation of the object(s) of interest in a scene will depend, in part, on factors such as the size, texture, illumination and potential occlusions of the object of interest, as well as the distance of the object of interest from the image-capture device.
  • sequential images extracted from video will possess overlap.
  • the overlap present in sequential images generated from video will depend, in part, on the speed at which the user moves the image-capture device around the at least one object of interest and the orientation of the image-capture device in space with reference to the object of interest.
  • the 2D images can be made suitably overlapping with regard to the at least one object of interest using one or more methods known to one of ordinary skill in the art, such as, in some embodiments, the camera operator taking the successive still images including the at least one object of interest while changing the angular orientation, the linear location, the distance, or a combination thereof in a manner that has the object of interest in each successive image captured.
  • the plurality of overlapping 2D images are suitably processable to allow accurate 3D digital representations of the at least one object of interest to be derived therefrom.
  • the individual images can be overlapped, where such overlap is, in reference to the at least one object of interest, at least about 50% or at least about 60% or at least about 70% or at least about 80% or at least about 90%.
  • the amount of overlap in the individual images in the plurality of overlapping 2D images, as well as the total number of images needed to provide an accurate digital representation of the object of interest will also depend, in part, on the relevant features of the object(s).
  • such relevant features include, for example, the amount of randomness in the object shape, the texture of and size of the at least one object of interest relative to the image- capture device, as well as the complexity and other features of the overall scene.
  • the present invention comprises image-capture devices comprising passive sensing techniques, and methods relating thereto, utilizing a plurality of overlapping 2D images suitable for generating accurate 3D digital representations of the at least one object of interest in a scene.
  • the inventors herein have found that accurate 3D digital representations of the object(s) present in a scene can be obtained using a plurality of overlapping 2D images incorporating the object(s) substantially without the use of an active sensor/signal source, such as a laser scanner or the like.
  • "passive image-capture device" means that substantially no active signal source, such as a laser or structured light (as opposed to camera flash or general-illumination devices) or sound or other reflective or responsive signal, is utilized to measure or otherwise sense the at least one object of interest so as to provide the information needed to generate the accurate 3D digital representations of the at least one object of interest present in a scene.
  • "accurate," in relation to the 3D digital representations of the at least one object of interest, means, in part, data or other information from which substantially accurate measurements of the object(s) can be obtained, as defined elsewhere herein.
  • the present invention further includes passive photogrammetry techniques where the images are obtained from a single image-capture device.
  • no more than one passive image-capture device is used in accordance with the methods herein. This use of images from only a single image-capture device is in contrast to the traditional use of at least two cameras, or one or more projectors, to obtain 3D digital representations of the at least one object of interest by passive sensing methods as disclosed, for example, in US Patent Publication No. US2013/0083990 and PCT Publication No. WO2013/173383, each of which is incorporated by reference as set forth previously.
  • prior-art passive image-capture devices used to generate 3D digital representations of the at least one object of interest utilize at least two cameras (or projectors) displaced in a direction away from one another (e.g., horizontally) so as to obtain at least two differing views of a scene and any objects included therein.
  • the relative depth information of the scene and/or objects present therein can be obtained for display to a viewer with or without processing of the image there between.
  • prior art methods perform poorly if the motion between two frames is too small or limited. In contrast, the methodology herein leverages such small or limited motions and creates improved results in such situations.
  • the present invention includes methods of using mobile devices configured with passive image-acquisition capability suitable to provide accurate 3D digital representations of the at least one object of interest in a scene.
  • the methodology herein can be utilized to provide measurements and other useful information regarding the object(s).
  • the present invention includes methods of using mobile devices configured with passive image-acquisition technology, whereby substantially accurate measurements of one or more dimensions of the objects can be obtained.
  • point clouds that incorporate information regarding the at least one object of interest are generated using conventional methods.
  • a "point cloud” is a set of data points in the same coordinate system. In a three-dimensional coordinate system, these points are usually defined by X, Y, and Z coordinates.
  • inventive point clouds can be obtained, where such inventive point clouds further include additional data representative of edge information in the object of interest.
  • one or more of point clouds, edge clouds and line clouds are obtainable according to the methodology herein, wherein each of these aspects can include data or other information from which measurements or other useful information about the at least one object of interest can be generated.
  • an "edge cloud" is a set of edge points in the same coordinate system, each represented by X, Y and Z coordinates, and comprises one or more discontinuities in depth, surface orientation, reflection, or illumination.
  • a "line cloud" is a set of 3D straight lines in the same coordinate system. Each line can be defined using its two end points or Plücker coordinates.
  • the present invention can, in some circumstances, be characterized as "hybrid” in nature in that it is possible to utilize any combination of points, edges, and lines (point+edge+line, point+edge, point+line, edge+line, etc.).
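  • The three representations can be sketched as simple data structures; the Plücker conversion below follows the standard (direction, moment) convention and is an illustration, not the patent's implementation.

```python
# A minimal sketch of the point/edge/line cloud representations above.
from dataclasses import dataclass
import numpy as np

@dataclass
class PointCloud:
    xyz: np.ndarray        # (N, 3) reconstructed 3D points

@dataclass
class EdgeCloud:
    xyz: np.ndarray        # (N, 3) points lying on depth, surface-
                           # orientation, reflection, or illumination
                           # discontinuities

@dataclass
class LineCloud:
    endpoints: np.ndarray  # (M, 2, 3): two end points per 3D line

def plucker(p, q):
    """Plücker coordinates (direction, moment) of the line through p, q."""
    d = q - p              # direction
    m = np.cross(p, q)     # moment (equals p x (q - p))
    return d, m
```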
  • While the prior art can only produce point clouds, with the invention herein it is possible to create point clouds, line clouds, and edge clouds, and any combination thereof.
  • the prior art solutions only produce a point cloud with an unknown scale. Therefore, 3D measurements cannot be extracted directly from the point cloud.
  • the point cloud has to be scaled first.
  • 3D measurements can, in some embodiments, be extracted directly from the point cloud, line cloud and/or edge cloud data substantially without the need for a scaling step.
  • the accurate 3D digital representations of the at least one object of interest are generated by processing overlapping 2D-image data generated from one or more discontinuities in depth, surface orientation, reflection, or illumination, wherein such image data is derived from a plurality of overlapping 2D images of an object of interest.
  • the methodology herein utilizes data or other information extracted from a plurality of overlapping 2D images to create a robust data set for image processing wherein a plurality of lines, edges and points included therein are specific to lines, edges and points corresponding to the at least one object of interest as incorporated in the plurality of 2D overlapping images of the object.
  • the inventive methodology can provide one or more of the following improvements over the prior art: 1) the edge-detection method substantially filters out useless data, noise, and frequencies while preserving the important structural properties of the at least one object of interest; 2) the amount of data needed to provide an accurate 3D digital representation of an object is reduced, as is the need for attendant data processing; and 3) the necessary information needed for object detection and segmentation (i.e., object boundaries) is provided which is an unmet need in Building Information Modeling (BIM).
  • "BIM" means an object-oriented building-development tool that utilizes modeling concepts, information technology and software interoperability to design, construct and operate a building project, as well as communicate its details.
  • Further improvements of the present invention are found from the fact that the 2D digital images suitable for use in the present invention may be missing some or all of the information stored in EXIF tags; this allows images other than JPEG images to be used as input data in the present invention.
  • the invention provides a method of detecting boundaries in an object of interest in a scene.
  • overlapping 2D digital images of an object in a scene are generated.
  • Boundary detection information regarding the object is generated from a process that incorporates a structure-from- motion algorithm.
  • the boundary detection information can be used to generate measurements, 3D digital representations, 3D point clouds, 3D line clouds, 3D edge clouds and the like.
  • a “boundary” is a contour in the image plane that represents a change in pixel ownership from one object surface to another.
  • Boundary pixels mark the transition from one relatively constant region to another, where the constant region can comprise one or more of an object of interest or a scene in which the object appears in the image.
  • Boundary detection is a computer vision problem with broad applicability in areas such as feature extraction, contour grouping, symmetry detection, segmentation of image regions, object recognition, categorization and the like. Detecting boundaries is significantly different from simple edge detection, where "edge detection” is a low-level technique to detect an abrupt change in some image feature, such as brightness or color.
  • boundary detection relates to the detection of more global properties, such as texture and, therefore, involves integration of information across an image. So, for example, a heavily textured region might give rise to many edges, but to suitably provide information suitable to generate a 3D digital representation of an object of interest therefrom, there should be substantially no boundary defined within the textured region. Moreover, accurate boundary detection is needed to resolve discontinuities in depth that allow accurate rendering of 3D digital representations.
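  • For contrast, the low-level edge detection described above can be illustrated with a Canny filter (thresholds are illustrative); a heavily textured region yields many such edges even where no object boundary exists, which is why boundary detection must integrate information across the image.

```python
# Low-level edge detection (Canny) for contrast with boundary detection.
import cv2

gray = cv2.cvtColor(cv2.imread("scene.jpg"), cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, threshold1=80, threshold2=160)  # abrupt-change map
cv2.imwrite("edges.png", edges)
```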
  • information needed to generate accurate 3D digital representations of the at least one object of interest in a scene can be determined using a "structure-from-motion" algorithm.
  • a structure-from-motion algorithm can be used to extract 3D geometry information from a plurality of overlapping images of an object or a scene.
  • information needed to provide accurate 3D digital representations of the object(s) of interest can be generated from a process that incorporates a structure-from-motion algorithm that estimates camera positions for each image frame in the plurality of overlapping images.
  • many structure-from-motion algorithms incorporate key-point detection and matching, so as to form consistent matching tracks and allow solving for camera parameters.
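  • A sketch of the key-point detection and matching stage such pipelines typically begin with is shown below, using ORB features and an essential-matrix pose estimate; the detector choice, file names, and placeholder intrinsics are assumptions, as the patent does not mandate particular components.

```python
# A sketch of key-point detection, matching, and relative-pose recovery.
import cv2
import numpy as np

img1 = cv2.imread("frame_000.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_010.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
k1, d1 = orb.detectAndCompute(img1, None)
k2, d2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)

pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
pts2 = np.float32([k2[m.trainIdx].pt for m in matches])

# With an intrinsic matrix K, the relative camera pose follows from the
# essential matrix (K below is a placeholder, not real intrinsics).
K = np.eye(3)
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
```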
  • an inventive methodology comprises parameterizing a line with two end-points. This parameterization step provides two advantages over existing line- or point-based 3D reconstruction methodologies, such as those provided by prior art structure-from-motion algorithms, because the inventive methods are able to achieve the following.
  • a duality is created between points and lines that is preserved by: a) visual triangulation for calculating 3D coordinates of features (points and lines) and b) reprojecting 3D features into the 2D-image plane. This duality allows interchanging the role of points and lines in the mathematical formulations whenever appropriate.
  • a parameterization step facilitates modeling of lens-distortion parameters even when substantially only line-level information is present. Due to deviations from rectilinear projection caused by lens distortion, straight lines in a scene are typically transformed into curves in the image of the scene.
  • Existing line-based 3D-reconstruction algorithms assume that the input data (images or video frames) are already undistorted; this necessitates use of pre-calibrated cameras in prior-art methods. In some embodiments, substantially no such assumption is made in the present invention. As such, uncalibrated cameras are particularly suitable for use in the present invention.
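  • The effect can be demonstrated numerically: applying a one-parameter radial distortion model to points sampled along a straight line leaves them no longer collinear (the coefficient below is illustrative).

```python
# A sketch of why straight scene lines image as curves under lens
# distortion: a one-parameter radial model bends sampled line points.
import numpy as np

k1 = -0.25                                   # illustrative radial coefficient
t = np.linspace(0.0, 1.0, 5)
line = np.stack([-0.8 + 1.6 * t, 0.5 * np.ones_like(t)], axis=1)

r2 = (line ** 2).sum(axis=1, keepdims=True)  # squared radius per point
distorted = line * (1 + k1 * r2)             # points are no longer collinear
print(distorted)
```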
  • the present invention allows reprojection errors to be calculated with a weighing function that substantially does not over- or under-weight individual errors.
  • an information processor that uses the present invention performs the processes illustrated in the following figures:
  • Figure 1A is a block diagram of a system 101 according to some embodiments of the present invention.
  • system 101 includes one or more cloud-computing servers 181 each connected to the internet 180.
  • a non-transitory computer-readable storage medium 183 has photogrammetry instructions and data structures of the present invention stored thereon.
  • the methods of the present invention execute on cloud-computing server(s) 181 using the photogrammetry instructions and data structures from computer-readable storage medium 183, wherein a user 98 uploads images from still camera 182 and/or video camera 184 into cloud-computing server(s) 181, either directly (e.g., using the cell-phone or other wireless network, or through a conventional personal computer 186 connected to the internet).
  • photogrammetry instructions and data structures of the present invention are transmitted from computer-readable storage medium 183 into local non-transitory computer-readable storage media 187 (such as rotating optical media (e.g., CD-ROMs or DVDs) or solid-state memory devices (such as SDHC (secure data high-capacity) FLASH devices)), which are connected to, plugged into, and/or built into cameras 182 or 184 or conventional personal computers 186 to convert such devices from generic information processors into special-purpose systems that convert image data into photogrammetry data according to the present invention.
  • system 101 omits one or more of the devices shown and still executes the methods of the present invention.
  • Figure 1B presents a flowchart of a method 102 illustrating one aspect of the present invention.
  • In Figure 1B, Figure 2, Figure 3, and Figure 4, rectangular boxes represent functions and ovals represent inputs/outputs.
  • In block 100, a plurality of overlapping images is received. These images can be derived from a still image-capture device 182 or a video image-capture device 184, as discussed elsewhere herein.
  • feature lines are detected and matched/tracked in block 105 and block 110, respectively.
  • the outputs of the detection and matching processes of blocks 105 and 110 are the corresponding lines of block 115 and the corresponding points of block 120.
  • in block 125, methods, such as linear methods, are used for structure-recovery processes, such as those presented in more detail with reference to Figure 2.
  • an initial estimation of structure and motion data in block 130 is determined based on the structure recovery of block 125.
  • hybrid bundle adjustment techniques in block 135 are used to further refine/optimize the 3D structure and motion data 140 and are used in process 145 to generate a 3D point, line and/or edge cloud 150 representative of the at least one object of interest.
  • 3D structure and motion data 140 are used in a 3D plane detection process 155 to detect 3D planes 160.
  • the 3D point, line and/or edge cloud of block 150 and the 3D planes of block 160 are included in intelligent data smoothing in block 165 to generate a 3D digital representation at block 170 incorporating the at least one object of interest.
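  • The flow of method 102 can be summarized in a structural sketch in which every step is a stub standing in for the corresponding block of Figure 1B; the function names are placeholders, not the patent's implementation.

```python
# A structural sketch of the Figure 1B flow; every step is a stub so the
# skeleton runs end-to-end. Real implementations replace each stub.
detect_feature_lines = lambda imgs: []                # block 105 -> 115
match_track_points   = lambda imgs: []                # block 110 -> 120
recover_structure    = lambda lines, pts: ({}, {})    # block 125 (Fig. 2)
hybrid_bundle_adjust = lambda s, m: (s, m)            # block 135 -> 140
build_clouds         = lambda s, m: {"points": [], "lines": [], "edges": []}
detect_planes        = lambda s, m: []                # block 155 -> 160
smooth               = lambda clouds, planes: {"model": clouds,
                                               "planes": planes}  # 165 -> 170

def method_102(images):                    # block 100: overlapping 2D images
    lines = detect_feature_lines(images)
    points = match_track_points(images)
    structure, motion = recover_structure(lines, points)   # block 130
    structure, motion = hybrid_bundle_adjust(structure, motion)
    clouds = build_clouds(structure, motion)                # blocks 145/150
    planes = detect_planes(structure, motion)
    return smooth(clouds, planes)          # block 170: 3D representation
```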
  • Figure 2 is a flowchart of a method 201 illustrating an exemplary method to perform the structure-recovery portion 125 of the process of Figure 1B.
  • the present invention provides notable benefits relating to the ability to utilize a single image capture device to generate the plurality of overlapping images.
  • Figure 2 illustrates such benefits in relation to the structure recovery of block 125 called out from Figure 1B.
  • In block 200, pairwise epipolar geometries are computed and are used in block 205 to build a graph of epipolar geometries.
  • the confidence level for each epipolar geometry is calculated in block 210.
  • a connectivity graph is built in block 215.
  • the relative rotations of the various points on the connectivity graph of block 215 are estimated, followed by calculation of global rotations in block 225.
  • the relative translations and scaling factors for the resulting data are determined, whereby the data generated in method 201 of Figure 2 is used to provide an initial estimation of structure and motion 130 for further application to the process set out in Figure 1B.
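  • A sketch of the pairwise stage (blocks 200-215) is shown below: an essential matrix is estimated per image pair, and its RANSAC inlier count is kept as a stand-in for the per-edge confidence; the data layout and intrinsics are assumptions.

```python
# A sketch of building a weighted epipolar-geometry graph over image pairs.
import cv2
import numpy as np

def epipolar_graph(matched_points, K):
    """matched_points[(i, j)] -> (pts_i, pts_j): corresponding pixel
    arrays between views i and j; K is the intrinsic matrix."""
    graph = {}
    for (i, j), (pi, pj) in matched_points.items():
        E, inliers = cv2.findEssentialMat(pi, pj, K, method=cv2.RANSAC)
        if E is None:
            continue
        _, R, t, _ = cv2.recoverPose(E, pi, pj, K)
        confidence = int(inliers.sum())    # inlier count as edge confidence
        graph[(i, j)] = (R, t, confidence)  # relative rotation/translation
    return graph
```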
  • the global rotation for each view is calculated by the following methodology:
  • the spatial distance between each point pair in the point cloud will represent the distance between the corresponding physical points in the actual scene. In some embodiments, this is leveraged to extract a wide variety of dimensions and measurements from the point cloud.
  • the obtained knowledge about corner points, edge/boundary points, blobs, ridges, straight lines, curved boundaries, planar surfaces, curved surfaces, and other primitive geometry elements can provide the capability to identify significant parts of the scene and automatically extract corresponding measurements (length, area, volume, etc.).
  • the 2D locations of these primitive geometries are first detected in images or video frames. The image-based coordinates are then converted into 3D coordinates via the calculated camera matrices.
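  • A minimal sketch of such a measurement extraction, assuming a metrically scaled (or separately scaled) point cloud and two user- or detector-selected point indices:

```python
# Distance between two reconstructed points; `scale` converts model units
# to meters (1.0 if the cloud is already metric).
import numpy as np

def length_between(cloud_xyz, idx_a, idx_b, scale=1.0):
    """cloud_xyz: (N, 3) array of reconstructed points."""
    return scale * float(np.linalg.norm(cloud_xyz[idx_a] - cloud_xyz[idx_b]))
```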
  • the present invention provides accurate 3D digital representations of at least one object in a scene.
  • the level of accuracy of the 3D digital representations of the object(s) of interest is with reference to one or more of the actual dimensions of the object of interest.
  • at least one object of interest is identified, selected or otherwise specified, where the identification, etc., can include identification of at least one dimension of interest in the object, or such identification, etc., may include a plurality of dimensions of interest where each of these dimensions, independently, includes an actual value.
  • the identification, etc., of the at least one object of interest and/or the one or more dimensions in the object(s) can be by either or both of a computer or a user.
  • the accuracy of the measurements obtained according to the invention herein can be characterized in relation to a specified number of pixels.
  • the methodology herein allows a user to obtain measurements of one or more dimensions of the object of interest of up to and including a 1.0-pixel standard deviation or, in other embodiments, a 0.5-pixel standard deviation.
  • pixel size is a function of the image-capture device specifications and the distance of the image-capture device from the object of interest. This is illustrated in Table 2 hereinbelow.
  • accuracy in pixels relative to the actual dimensions of the object of interest is represented according to the following formula:
  • Pixel size in object = (distance of object of interest from IC device) × (IC device sensor size) / ((IC device focal length) × (IC device resolution))
  • the IC (“image capture”) device sensor size, resolution and focal length are features or characteristics of each image-capture device.
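  • As a worked example of the relation above, with illustrative smartphone-class numbers (not values taken from Table 1):

```python
# Worked example of the pixel-size relation with illustrative numbers.
distance_m = 3.0    # object of interest 3 m from the IC device
sensor_mm  = 4.8    # sensor width
focal_mm   = 4.0    # focal length
resolution = 4000   # pixels across the sensor width

pixel_size_m = distance_m * (sensor_mm / 1000) / ((focal_mm / 1000) * resolution)
print(pixel_size_m * 1000, "mm per pixel")  # 0.9 mm; a 1-pixel standard
                                            # deviation then implies ~0.9 mm
```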
  • Table 1 sets out some representative specifications for existing image-capture devices: TABLE 1
  • accuracy of the measurements derived from the image-capture device can also be represented as percent error.
  • the methodology herein enables measurements to be derived from the image-capture device having accuracy within, in some embodiments, about 5%, or in other embodiments, about 10%, or in still other embodiments, about 20% error relative to the actual measurement value of the object of interest. In some embodiments, this error is calculated from the following formula: % error = (|extracted measurement value - actual measurement value| / actual measurement value) × 100.
  • measurements are required to determine the amount of materials needed for a project.
  • such "estimation levels of accuracy” are equal to or less than about 20%, or in other embodiments, about 15% or in yet other embodiments, about 10% or more than about 5% of the actual dimensions of the at least one object of interest.
  • an extracted measurement value of the at least one object of interest that is one-hundred ten (110) inches (279.4 cm) is within an "estimation level of accuracy" when the actual measurement of the at least one object of interest is 100 inches (254 cm), such that the error is 10%.
  • Situations where such "estimation level of accuracy" would be valuable, for example, are to estimate the materials needed to for carpet, wallpaper, paint, sod, roofing and the like.
  • such "fabrication level of accuracy” means that the extracted measurement value is less than about 5%, or in other embodiments, less than about 3% or in still other embodiments, less than about 2% or less than about 1% of the actual dimensions of the at least one object of interest.
  • Situations where such "fabrication level of accuracy" would be appropriate include, for example, measurements used to manufacture custom cabinets, off-site preparation of construction details (trim), identification of exact dimensions of componentry (e.g., space available for appliances, BIM) and the like.
  • software associated with the methods and devices of the present invention is configured to provide information regarding the error in the measurement presented. For example, in some embodiments, when the measurement of an object is reported to the user as 10 feet (3.048 meters) along one dimension, information about any error in such measurement (pixel accuracy or % error) is provided as set out elsewhere herein.
  • because the 3D digital representations of the at least one object of interest are derived from the plurality of overlapping 2D images, the measurements can be obtained substantially without need for a separate scaling step, such as that required to obtain measurements of objects with the Photomodeler product, for example.
  • an image-capture device can be integrated into a mobile device to allow images of the at least one object of interest to be obtained.
  • Software either included in or associated with the mobile device can be suitably configured to allow the 2D-image processing, data generation, and generation of the 3D digital representation of the object(s) to occur substantially on the mobile device using software and hardware associated with the device.
  • Such software, etc. can also be configured to present to the user a measurement of one or more dimensions of the object of interest or to store such measurement for use.
  • measurements of the at least one object of interest can be obtained using a marker as a reference.
  • A marker, for example a ruler or other standard-sized object, can be incorporated in a scene that includes the at least one object of interest.
  • one or more dimensions of the object can be derived using known methods.
  • measurements of the at least one object of interest can be obtained without use of, or in addition to, a marker.
  • the invention utilizes an internal or "intrinsic" reference.
  • Using an intrinsic reference, the invention herein allows a user to generate substantially accurate measurements of the at least one object of interest.
  • substantially accurate measurements are provided, in some aspects, by incorporation of the intrinsic reference into the software instructions associated with the image-capture device and/or any hardware with which the device is associated.
  • the intrinsic reference comprises one or more of: i) dimensions generated from at least two focal lengths associated with the image-capture device; ii) a library of standard object sizes incorporated in software provided to the image-capture device; iii) user identification of a reference object in a scene that contains the at least one object of interest; and iv) data from which measurements of the at least one object of interest can be derived, wherein such measurement data is generated from a combination of inertial sensors associated with the image-capture device, where the sensors provide data comprising: (a) an acceleration value from an accelerometer associated with the image-capture device; and (b) an orientation value provided by a gyroscopic sensor present in the image-capture device.
  • image-capture devices (e.g., cameras) can comprise a short depth of field, producing images which appear focused only on a small 3D slice of the scene.
  • Such features can be utilized in the present invention to allow estimation of the depth or 3D surface of an object of interest from a set of two or more images incorporating that object. These images can be obtained from substantially the same point of view while the image-capture device parameters (e.g., the focal length) are modified.
  • the amount of blur in captured images can be used to provide an estimation of the object depth where such depth can be used to derive measurements of one or more dimensions of interest of the object.
  • a library of standard object identities and sizes can be included in the software associated with the image-capture device to provide data from which measurement data for the at least one object of interest can be derived.
  • the size of one or more standard objects can serve as a reference when such an object appears in the same scene as the at least one object of interest. For example, when a standard light-switch plate ("switchplate") appears in the scene, the known standard dimensions of this switchplate can be used as an intrinsic reference to provide a point of reference from which the dimensions of the object of interest can be derived.
  • the user can identify the intrinsic reference object manually or object recognition methodologies can be used to automatically process the dimension data.
  • the reference object used as the intrinsic reference can be generated from a database of digital photographic and/or video images that are likely to occur in a given environment, for example.
  • a database of common objects present in a construction or contractor setting can be included in software configurations directed toward such users. Items related to household furnishings can be included in software configurations directed toward interior decorators.
  • the database may include photographic and/or video images of structures within some general use or location.
  • the intrinsic reference can be provided by user identification of an object of interest that can serve as a reference.
  • the software associated with the image-capture device and/or the hardware into which the image-capture device is integrated can be configured to allow the user to select an object in the scene to serve as a reference, such as by way of a user interface.
  • the user can measure the reference object directly and input the measured value, or select from a library of standard objects, as discussed previously, where such library is associated with the software of the present invention.
  • the system 101 will elicit and receive the specification of an object to be used for dimensional calibration, and the user will select the switchplate cover to serve as the intrinsic reference. The user can then measure the dimensions of the switchplate cover and input the dimensions into the appropriate fields in the user interface when that information is elicited. Calculations of the dimensions of the object of interest will then be provided using the methodology set out elsewhere herein.
  • in response to the system eliciting an object to be used for dimensional calibration, the user selects the switchplate cover as a reference object, and the standard dimensions of a switchplate cover are obtained from a library of standard object sizes incorporated within the software associated with the image-capture device, thereby allowing the measurements of an object of interest to be obtained as set out elsewhere herein.
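  • A minimal sketch of deriving the model-to-world scale from such a user-identified reference; the switchplate height in the usage comment is an approximate assumption for a standard single-gang cover.

```python
# Derive a metric scale factor from a reference object of known size.
import numpy as np

def scale_from_reference(ref_model_pts, known_length_m):
    """ref_model_pts: (2, 3) model-space points spanning a known dimension
    of the reference object; returns meters per model unit."""
    model_length = np.linalg.norm(ref_model_pts[1] - ref_model_pts[0])
    return known_length_m / model_length

# Usage (assumed dimension): a standard single-gang switchplate cover is
# roughly 0.114 m (4.5 in) tall.
# scale = scale_from_reference(ref_pts, 0.114)
# cloud_metric = cloud_xyz * scale
```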
  • the intrinsic reference can be provided by sensor data obtained from inertial sensors associated with the image-capture device.
  • calculating the image-capture device displacement between two images/frames allows resolution of scale ambiguity.
  • the image-capture device displacement is extracted from data provided by inertial sensors (e.g., accelerometer and gyroscope) in the image-capture device.
  • a gyroscope measures orientation based on the principles of angular momentum.
  • An accelerometer measures gravitational and non- gravitational acceleration.
  • integration of inertial data generated by movement of the image-capture device over time provides data regarding displacement that, in turn, is utilized to generate measurements of one or more dimensions of the object of interest using known methods.
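  • A sketch of that integration, assuming the gyroscope-derived orientation has already been applied so that accelerations are expressed in a gravity-compensated world frame (drift correction is omitted):

```python
# Double-integrate gravity-compensated acceleration to estimate the
# image-capture device's displacement between two frames.
import numpy as np

def displacement(accel_world, dt):
    """accel_world: (N, 3) accelerations sampled at period dt between two
    video frames; returns the net displacement vector (meters)."""
    velocity = np.cumsum(accel_world * dt, axis=0)  # first integration
    position = np.cumsum(velocity * dt, axis=0)     # second integration
    return position[-1]
```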
  • image-capture-device-specific data is obtained by system 101 to provide more accurate measurement of the at least one object of interest.
  • the actual image-capture device specifications such as, for example, focal length, lens distortion parameter and principal point are determined through a calibration process.
  • a self-calibration function is performed without image-capture device details, which can occur when such details are not stored.
  • software associated with the image-capture device can suitably estimate information needed to provide measurements of the at least one object of interest.
  • self-calibration of the camera is conducted using the epipolar-geometry concept. The epipolar geometry between each image pair can provide an estimated value of the focal length. The collection of these estimates is used in a prediction model to predict an optimum focal-length value.
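  • A sketch of aggregating the per-pair estimates; the median below is a simple stand-in for the prediction model, which the patent does not specify in this excerpt.

```python
# Robustly aggregate per-pair focal-length estimates from epipolar geometry.
import numpy as np

def predict_focal(pairwise_estimates):
    """pairwise_estimates: focal-length estimates (pixels), one per image
    pair, derived from each pair's fundamental/essential geometry."""
    return float(np.median(np.asarray(pairwise_estimates, dtype=float)))
```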
  • As used herein, image-capture devices in use today are integrated into mobile devices such as "smartphones," mobile telephones, "tablets," "wearable devices" (such as where a camera may be embedded or incorporated into clothing, eyeglasses or functional jewelry, etc.), laptop computers, unmanned aerial vehicles (UAVs; e.g., drones, robots), etc. Still further, the image-capture devices 182 and 184 (see Figure 1A) can be associated with (such as by being in communication with) desktop computers 186 and cloud-based computers 181. It is expected that new types of image-capture devices will be introduced in the future.
  • image-capture devices are included in the present invention if these devices can be configured to incorporate the inventive methods herein.
  • smartphones are wireless, compact, hand-held devices that, in addition to basic cellular telephone functions, include a range of compact hardware.
  • Typical smartphones have embedded (or "native") digital cameras that include both video and static image-acquisition capabilities, large touchscreen displays, and broadband or Wi-Fi capabilities allowing for the receipt and transmission of large amounts of data to and from the Internet.
  • tablet computers and wearable devices have emerged that provide, in pertinent part, many of the functionalities of smartphones, including image capture and processing capabilities and WiFi and cellular capabilities.
  • Smartphones, tablets and wearable devices not only include a range of hardware, they are also configured to download and run a wide variety of software applications, commonly called “apps.”
  • the invention advantageously utilizes basic features of smartphones, tablets, and wearable devices and extends the capabilities of these devices to include accurate and convenient measurement of one or more objects of interest by using the image-capture devices native on such devices.
  • the processes described herein may convert a common smartphone, tablet, wearable device, standalone camera or the like into a measurement tool, medical device or research tool, for example. Such aspects will benefit users by extending the functionality of these devices.
  • devices that include less functionality, such as "standalone" digital cameras or video cameras, are also used in some embodiments.
  • Such image-capture devices generally include WiFi and/or cellular capabilities, as well as “apps” so as to provide networked functionality. Accordingly, such image-capture devices can suitably be utilized in accordance with one or more of the inventions herein.
  • One example of a standalone digital camera that can be used is the GoPro® H3.
  • an image-capture device intended for use by professionals who work with exterior and interior building spaces (e.g., architects, contractors, interior designers, etc.) can be configured with hardware and software suitable to allow the users to obtain measurements that they can use in their respective professional responsibilities.
  • the methods herein can also be provided in the form of an application specific integrated circuit ("ASIC") that is customized for the particular uses set out herein.
  • Such ASIC can be integrated into suitable hardware according to known methods to provide a device configured to operate the methods herein.
  • the present invention relates to mobile devices and the like that are configurable to provide substantially accurate measurements of at least one object of interest, where such measurements are derived from a 3D digital representation of the object of interest obtained according to the methodology herein.
  • the dimensions of a roof can be obtained using a single video camera that includes passive image-capture capability, such as that embedded in a mobile device, thereby eliminating the need to send a person to the location to measure the size of the roof to provide an estimate.
  • the dimensions of a kitchen can be obtained using the passive image-acquisition and processing methods herein, thereby allowing cabinets or the like to be sized accurately without the need to send an estimator to the customer's home.
  • accurate dimensions of a floor area can be provided using measurements derived from wall-to-wall distances in a room, so as to provide an estimate of the amount of materials needed for a flooring project.
  • remote measurement of locations such as roofs, kitchens and flooring would provide significant benefits to contractors, who currently must first visit a location to obtain substantially accurate measurements before they can provide a close estimate of the cost of a construction job.
  • Such applications are described in the co-assigned US Provisional Application No. 62/165,995, previously incorporated herein.
  • the devices and methods herein are used to provide substantially accurate measurements and characteristics of a person's body so as to allow custom clothing to be prepared for him or her without the need to visit a tailor. In some embodiments, such accurate body measurements are used to facilitate telemedicine applications.
  • the invention herein provides accurate measurement of wound size and other characteristics present on a human or an animal.
  • the present invention further relates to medical devices configured with image-capture devices and associated software that provide the disclosed benefits and features.
  • the accurate 3D digital representations of the object(s) can be used to create accurate 3D models of the object of interest, where such 3D models can be generated using 3D printing devices, etc.
  • the methodology herein is utilized in conjunction with navigation utilized for robots, unmanned autonomous vehicles and the like where such navigation utilizes image-capture devices therein.
  • the present invention can be incorporated with Simultaneous Localization And Mapping ("SLAM").
  • SLAM is a method used in robotic navigation whereby a robot or autonomous vehicle estimates its location relative to its environment, while simultaneously avoiding dangerous obstacles.
  • the autonomous vehicle makes observations of surrounding landmarks from poses obtained from one or more image-capture devices associated with the vehicle and probabilistic methods are used to achieve maximum likelihood estimation of the camera trajectory and 3D structure.
  • the methods herein can be performed on a single purpose device.
  • an image capture device intended for use by professionals who work with interior and exterior areas and building spaces (e.g., architects, contractors, interior designers, engineers, landscapers etc.) can be configured with hardware and software suitable to allow the users to obtain information such as measurements that they can use in their respective professional responsibilities.
  • a device configured specifically to generate surveys of interior and exterior scenes using the inventive methods herein comprises an inventive survey device.
  • the present invention relates to devices and methods for generating surveys of interior and exterior scenes or objects in the scenes using image capture devices associated with image processing techniques suitable to allow survey information to be returned quickly to a user and where such survey information can optionally be further processed.
  • Such surveys and their included survey information relating to interior and exterior scenes can be used in applications such as construction/remodeling estimation, 3D model generation, insurance policy underwriting and adjusting, interior and exterior design efforts, real estate marketing, inventory management and other areas where it can be desirable to obtain information about features and dimensions of one or more features or objects present in the scene.
  • the surveying devices and methods can capture information such as measurements, features, dimensions, quantity etc. relating to interior and exterior scenes or objects in the scene while a user is on-site and such information can be returned quickly to the user for use thereof.
  • an image capture device such as those mentioned previously, can be used to generate images of one or more areas of interest.
  • the images used to generate interior survey information according to the present invention can be processed using microprocessor capability native to the image capture device, or the images and associated data can be transmitted to a remote server (e.g., to the cloud) for processing outside of the device.
  • the interior location survey information generated from the images can be returned to the user (e.g., provided on a smartphone or tablet or available for use on a PC etc.).
  • the survey information can be returned for use in one or more apps associated with the user's device.
  • an app can use the survey information obtained from the processed images to provide takeoff information to a user.
  • the survey information can be utilized for a variety of uses as discussed elsewhere herein.
  • the survey information derived from images obtained of the scenes or locations of interest can be used to, for example, generate floorplans, takeoff information, and interior design information to provide information to insurance companies, for real estate marketing, 3D model generation, inventory management and the like.
  • the present invention allows one to obtain information regarding one or more of measurements, location, direction, fixtures (e.g., appliances, furniture, built-in cabinets etc.), floor, wall and ceiling dimensions, the presence or absence of doorways and windows, electrical and plumbing locations, property dimensions (e.g., television size etc.), as well as other information that can be derived from a survey of an interior scene or location.
  • aspects of commercial and residential interior scenes that can be suitably surveyed with the devices and methods of the present invention include, as illustrative examples, one or more of: internal walls that are straight or curved, stairways, doors, windows, cutouts, holes, island areas, borders, insets, flooring and ceiling dimensions, etc.
  • the surveying devices and methods of the present invention can be utilized to obtain one or more floorplans associated with an interior location of interest.
  • a "floorplan” is a drawing to scale of a location showing a view from above of the relationships between rooms, spaces and other physical features at one level of a structure. Dimensions can be drawn between the walls to specify room sizes and wall lengths.
  • Floorplans may also include details of fixtures like sinks, water heaters, furnaces, manufacturing equipment, etc.
  • apps or other software associated with the present invention can be configured to automatically import measurements and dimensions onto a floorplan generated herein.
  • Floorplans can also include notes for construction to specify finishes, construction methods, or symbols for electrical items.
  • the drawings obtainable from the devices and methods herein are equally suitable for printing to provide, for example, blueprints, or they can be made visible on a device screen for use in a non-paper environment.
  • the data can be generated in CSV (comma-separated values) form for utilization in construction estimation programs operating from a spreadsheet environment.
  • the information can be utilized to generate AutoCAD® files that can be used to create, for example, 3D models of an interior location, where such models may be used to generate architectural, construction, engineering, and other documentation related to a construction project.
  • the interior survey information can be provided for use in the well-known DWG, DXF or STL file formats.
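As a simple illustration of the CSV output mentioned above, survey measurements might be flattened to rows along the following lines; the column names and values here are hypothetical, chosen only for illustration:

    import csv

    # Hypothetical survey rows: (room, feature, length in meters).
    measurements = [
        ("kitchen", "north wall", 4.27),
        ("kitchen", "east wall", 3.05),
        ("hallway", "doorway width", 0.91),
    ]

    # Write a spreadsheet-ready CSV file for downstream estimation programs.
    with open("survey.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["room", "feature", "length_m"])
        writer.writerows(measurements)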
  • the information generated from the surveys of the present invention can be used to generate takeoff information, where such information can be further used to generate materials lists for use in construction, remodeling or the like.
  • the devices and methods of the present invention have wide application to a number of construction and remodeling-related activities where accurate data relating to one or more dimensions of a location is needed.
  • the present invention provides devices and methods for generating takeoffs applicable to construction, remodeling and interior or exterior design.
  • the present invention provides benefits in construction project management. To this end, the devices and methods of the present invention facilitate management of inventories of construction elements, which can further enable a contractor to rapidly perform engineering cost analyses while a project is underway, thus better allowing the effects of revisions and change orders on scheduling and project cost to be assessed.
  • a common requirement in construction projects is the generation of an estimate of the cost of a product by utilizing drawings obtained from measurement of a location. As would be recognized, in order to submit a bid for the construction project, a manufacturer's representative or estimator must first generate a quantity takeoff.
  • Quantity takeoff is an estimation of the quantities needed to undertake and complete a project based on the drawings and specifications. Quantity takeoff is generally the first part of an estimating process. The remainder of the estimating process includes determining material selection and cost. Quantities may include numerical counts, such as the number of doors and windows in a project, but may also include other quantities such as the volume of concrete or the lineal feet of wall space.
  • takeoffs can be the most time-consuming part of a construction or remodeling project because multiple measurements must first be made of the relevant scenes or objects. Moreover, no payment is generally provided for preparing takeoffs because they are part of the bidding process. Errors in creating takeoffs using existing methods also generally mandate that the amounts of components obtained from analysis of interior dimensions be increased by at least 10%, and sometimes by as much as 25%. Because many components are not returnable for credit at the completion of a job, such extra materials make construction and remodeling jobs more expensive and create construction waste. It has been found that the speed and accuracy of takeoffs can be greatly enhanced using the surveying devices and methods of the present invention.
  • the surveys generated by the present invention can provide survey information for all or part of a flooring area in an interior location in which flooring materials are to be installed. That is, the surveys of the present invention can be used to provide information that can be used to generate flooring takeoffs.
  • flooring materials are broadly defined to include carpet, carpet tile, ceramic tile, laminate flooring and similar materials.
  • processing to provide flooring material takeoff information generally comprises the following steps, which are needed in order for a bid to be provided: using the relevant survey information generated from the inventive interior surveying devices and methods herein, the manufacturer's representative or estimator selects one or more flooring components for calculation, calculates the number of components required, and calculates the cost of the components.
  • a parts list can also be generated for ordering and inventory management for the needed components.
  • the present invention provides improvements in devices and methods to allow such flooring takeoffs to be obtained more quickly and easily and, in some aspects, the survey information obtained herein can provide more accurate information, thus leading to more accurate takeoff information obtainable therefrom.
  • measurement and other pertinent dimensional information can be derived from the survey information generated according to the invention herein, thus providing improvements in flooring takeoff generation.
  • inventive devices and methods can also be used to provide accurate information regarding one or more of carpet seam layout and manipulation, cut waste optimization, roll cut sheet
  • a carpet section comprising a pattern can be overlaid onto a floorplan to allow a designer or installer to generate the optimal placement of carpet sections.
  • Such functionality can not only improve the aesthetic appearance of a final carpet installation, but waste from excess remnant generation can also be reduced.
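A minimal sketch of the takeoff arithmetic discussed above, assuming a simple area-based flooring product; the waste factor, coverage, and unit price are placeholder values, not figures from the invention:

    import math

    def flooring_takeoff(floor_area_m2, unit_coverage_m2, unit_cost, waste_factor=0.10):
        """Estimate units and cost for a flooring job from a surveyed floor area.

        waste_factor models the cut-waste overage discussed above; more accurate
        survey dimensions may allow a smaller factor to be used.
        """
        gross_area = floor_area_m2 * (1.0 + waste_factor)
        units = math.ceil(gross_area / unit_coverage_m2)
        return units, units * unit_cost

    # Example: a 42.5 m^2 room covered by 0.25 m^2 carpet tiles at $4.80 per tile.
    tiles, cost = flooring_takeoff(42.5, 0.25, 4.80)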
  • virtual tours can be obtained from the survey information generated from image processing as described elsewhere herein.
  • Virtual tours can provide a way for a user to view different parts of a physical location or site from different perspectives and in different directions.
  • such virtual tours can be useful in many different contexts where it may be advantageous for a user to see a physical site remotely from different angles and points of view.
  • Examples of physical sites that can be presented in a virtual tour of a location include: a house or other property for sale; a hotel room or cruise ship stateroom; a museum, art gallery, or other sightseeing destination; or a factory or facility for training purposes or the like.
  • by incorporating accurate dimensions and other survey information obtained as described herein, the value of a virtual tour can be greatly improved.
  • the survey information provided by the survey devices and methods of the present invention can allow a potential buyer or renter to see the actual dimensions of a room to determine whether her furniture or other fixtures will fit.
  • the ability to obtain accurate dimensions of scenes from the surveying devices and methods of the present invention can allow a user to overlay pictures of furniture etc. that she wishes to buy onto a floorplan or even an actual image of the room to make sure the furniture will fit prior to making a purchase.
  • the surveys of the present invention can be utilized to generate information useful for insurance underwriting and/or for claims adjustment.
  • a destructive event such as a fire
  • an insurance company can obtain a floorplan, virtual tour or the like of an insured's house or facility as a requirement for underwriting a policy or by providing a discount to an existing policy holder.
  • An insurance company may also be interested in obtaining takeoff information and, as mentioned previously, such information is obtainable from the devices and methods herein.
  • because the survey functionality of the present invention allows substantially accurate dimensions of interior and exterior locations to be acquired, along with those of any fixtures incorporated therewith, an insurance company that obtains such a survey prior to an occurrence of a destructive event (such as a fire) that results in a claim can better ensure that the information provided by the insured accurately matches the conditions of the location existing prior to the destructive event.
  • Three-dimensional (3D) models of interior and exterior locations can further be derived from the survey information obtained using the presently described inventions. Besides applicability to virtual tours as described elsewhere herein, such 3D models can be utilized to provide users with an immersive experience regarding a remote location. For example, a 3D model of an airplane interior can allow a potential traveler to understand how much legroom he will have on a flight. Such 3D models can also allow a user to remotely travel to a store to generate an improved online shopping experience. Yet further, 3D models obtained from the interior and exterior surveys of the present invention can be used to provide immersive learning experiences for remote training or the like.
  • the survey devices of the present invention can be used to provide inventories of fixtures or equipment or stock present at a location.
  • one or more images can be taken of a location from which the number of items present can be derived using the survey information obtained from the images.
  • when images are obtained from image capture devices present in a warehouse or other type of facility, real-time inventory management information can be obtained, and the present invention has utility in security applications and the like.
  • the survey devices and associated information derived from images generated and processed according to the present invention can be compared to information obtained from a library of image information stored or otherwise obtainable by a user.
  • a database of common objects present in a construction or contractor setting can be included in apps or other software implementations directed toward such users.
  • a survey provides information that an object with a size of 4.5 inches (11.43 cm) in height and 2.75 inches (6.985 cm) in width is present in a scene
  • associated software can return information to the user that the object in the scene is, with high likelihood, a standard US toggle switchplate.
  • Such information can allow a user to easily obtain, from images, information regarding the number and location of light switches in a location.
  • the library of image data associated with the survey inventions herein can be included in software
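One plausible way to implement this lookup is a tolerance-based match between surveyed dimensions and a table of known objects; the library entries (other than the switchplate dimensions given above) and the tolerance below are illustrative assumptions, not data from the patent:

    # Illustrative object library: name -> (height_cm, width_cm).
    LIBRARY = {
        "US toggle switchplate": (11.43, 6.985),
        "standard US interior door": (203.2, 91.4),
        "wall-mounted defibrillator cabinet": (43.0, 36.0),  # hypothetical entry
    }

    def identify(height_cm, width_cm, tolerance=0.05):
        """Return library objects whose dimensions match the surveyed object
        within a relative tolerance (5% here, an assumed value)."""
        matches = []
        for name, (h, w) in LIBRARY.items():
            if abs(h - height_cm) / h <= tolerance and abs(w - width_cm) / w <= tolerance:
                matches.append(name)
        return matches

    # The surveyed 11.43 cm x 6.985 cm object matches the switchplate entry.
    print(identify(11.43, 6.985))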
  • the survey devices and associated information of the present invention can be used further to generate information regarding interior or exterior locations relating to the presence or absence of required fixtures or equipment.
  • an interior survey can be conducted with the devices and methods of the present invention to determine whether a required piece of equipment is present in a location.
  • many locations require the presence of defibrillators or other safety equipment in a prescribed number and in certain locations.
  • the devices and methods of the present invention can be used to obtain surveys of such locations. Because the required equipment will have a known size and required orientation in a room, survey information obtained from the devices and methods of the present invention can be used to determine whether the required equipment (a defibrillator in this example) is present.
  • the survey devices and associated information can be used to determine whether a facility or other location complies with the Americans with Disabilities Act or other types of government regulations where locations are required to have fixtures or construction elements having a certain configuration.
  • survey information obtained according to the present invention can allow determination of whether doorways are suitably wide, ramps are present, etc.
  • the survey devices and associated information can also be used to generate ground-level survey information.
  • the types of outdoor survey information obtainable using the inventive methodology are varied.
  • the inventive devices and methodology can be used to generate one or more of construction surveys, as-built surveys, exterior fixture and equipment inventories, landscaping plans and the like.
  • the surveys and associated information can also have utility for forensic science applications.
  • the devices and methods herein can be used for documenting a crime scene and can provide a capability to make subsequent measurements using captured floorplan and image data for use as evidence. The ability of an investigator to obtain accurate
  • the surveys and associated information can also be used in any application in which surveys that capture accurate measurements of objects, fixtures, construction features, etc. of interior or exterior scenes or objects in a scene are desired, where such accurate measurements are generated from images derived from image capture devices as described elsewhere herein.
  • a video stream 300 is provided to method 301.
  • These images can be derived from a video image-capture device (such as camera 184 of Figure 1A) as discussed elsewhere herein.
  • line segments and corner points are detected and tracked in block 305 and block 310, respectively, of method 301.
  • the output of the detection and matching process of block 305 and block 310 includes corresponding line tracks 315 and corresponding point tracks 320.
  • SLAM is conducted as set out in more detail in the description of Figure 4.
  • an initial estimation of structure and motion data resulting from block 325 is determined based on the recovered structure data in block 330.
  • hybrid bundle-adjustment techniques 335 are used to further refine/optimize the 3D structure and motion data 340 and are used, in some embodiments, in process 345 to generate a 3D point, line and/or edge cloud 350 representative of the at least one object of interest.
  • 3D structure and motion data 340 are used in a 3D plane detection process 355 to detect 3D planes 360.
  • the 3D point, line and/or edge cloud 350 and 3D planes 360 are used in intelligent data smoothing in block 365 to generate a 3D digital representation 370 incorporating the at least one object of interest.
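Read as pseudocode, the data flow of method 301 might be organized as in the following structural sketch. Every function is a named placeholder for the corresponding block; the patent specifies the flow, not these implementations.

    # Structural sketch only: each placeholder stands in for a block of Figure 3.

    def detect_and_track_lines(frames):    # block 305 -> line tracks 315
        raise NotImplementedError

    def detect_and_track_points(frames):   # block 310 -> point tracks 320
        raise NotImplementedError

    def slam(frames, lines, points):       # block 325 -> initial estimate 330
        raise NotImplementedError

    def hybrid_bundle_adjust(estimate):    # block 335 -> refined structure/motion 340
        raise NotImplementedError

    def build_cloud(structure, motion):    # block 345 -> point/line/edge cloud 350
        raise NotImplementedError

    def detect_planes(structure, motion):  # block 355 -> 3D planes 360
        raise NotImplementedError

    def smooth(cloud, planes):             # block 365 -> 3D representation 370
        raise NotImplementedError

    def reconstruct(frames):
        """Assemble the Figure 3 pipeline from video frames (input 300)."""
        lines = detect_and_track_lines(frames)
        points = detect_and_track_points(frames)
        estimate = slam(frames, lines, points)
        structure, motion = hybrid_bundle_adjust(estimate)
        return smooth(build_cloud(structure, motion), detect_planes(structure, motion))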
  • SLAM 325 of Figure 3 is implemented using method 401.
  • a proper image-capture device (e.g., camera) motion model 400 is initially identified. In some embodiments, such selection is as simple as a model that assumes constant (or substantially constant) directional and angular velocity; in other embodiments, it is more complex.
  • video frames 405 are read one by one. For each new video frame 405, an initial estimation of the camera pose is calculated according to predictions from the camera motion model selected in block 400. Previously detected features are tracked according to visibility constraints and new features are detected, if necessary.
  • each new feature is parameterized using inverse depth.
  • the feature-tracking information is combined with the predicted motion in block 415 to allow determination of future locations in block 420. Once these locations are determined, in block 425 the predicted camera pose and 3D structure are refined based on the new observations. These observations are also used to update the camera motion model in block 430. A parallax is calculated for each feature in block 435 according to the updated parameters, and if a suitable parallax is observed, a Euclidean representation is used to replace the inverse-depth parameterization. A semi-global optimization is then applied based on the visibility information to find the maximum likelihood estimation of the camera poses and 3D structure in block 440. This process is repeated until all video frames are determined by block 445 to have been processed, and method 401 provides the initial estimation of structure and motion of block 330 (referring again to Figure 3).
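Two pieces of this loop lend themselves to a concrete sketch: the parallax test of block 435 and the switch from inverse-depth parameterization to a Euclidean point. The threshold below is an assumed value, as the text does not specify one.

    import numpy as np

    PARALLAX_THRESHOLD_DEG = 1.0  # assumed threshold; the patent gives no number

    def parallax_deg(ray_a, ray_b):
        """Angle between two viewing rays of the same feature (cf. block 435)."""
        c = np.dot(ray_a, ray_b) / (np.linalg.norm(ray_a) * np.linalg.norm(ray_b))
        return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

    def inverse_depth_to_point(anchor_center, unit_ray, rho):
        """Switch a feature from inverse-depth parameterization (anchor camera
        center, unit viewing ray, inverse depth rho) to a Euclidean 3D point."""
        return np.asarray(anchor_center) + np.asarray(unit_ray) / rho

    # Example: a feature first seen from the origin along +z at inverse depth
    # 0.25 (i.e., 4 units away) is promoted once sufficient parallax is observed.
    p = inverse_depth_to_point([0.0, 0.0, 0.0], [0.0, 0.0, 1.0], 0.25)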
  • the software associated with the image-capture device and/or the hardware into which the image-capture device is integrated is configured to provide the user with interactive feedback with regard to the image-acquisition parameters.
  • interactive feedback provides information regarding the object of interest including whether the tracking is suitable to obtain a plurality of overlapping 2D images necessary to provide suitable images from which 3D digital representations of the object(s) of interest can be generated to provide substantially accurate measurements or other useful information relating to the object.
  • such processing is conducted in the image-capture device itself (e.g., device 182 or device 184 of Figure 1 A) or the hardware in which the device is integrated (e.g., smartphone, wearable device, etc.).
  • the processing is performed "in the cloud" on a server 181 that is in communication with the image-capture device/hardware.
  • the processing is performed on any device (e.g., device 186 of Figure 1A) in communication with the image-capture device and/or hardware.
  • such processing is performed on both the device/hardware and an associated server, where decision-making regarding the location of various parts of the processing may depend on the speed and quality with which the user needs results.
  • user feedback is provided in real time, in near real time or on a delayed basis.
  • the user display of the 3D digital representation is configured to receive user-generated inputs to facilitate generation of the plurality of overlapping 2D images of the at least one object of interest, the 3D digital representations of the object(s) of interest and/or the extracted measurement values.
  • user-generated inputs include, for example, the level of detail, a close-up of a portion of the point cloud/image, optional colorization, a desirable level of dimension detail, etc.
  • the software associated with the image-capture devices and methods herein is configured to provide an accuracy value for the 3D digital representations of the object(s). By reporting a level of accuracy (where such accuracy is derivable as set out elsewhere herein), a user will obtain knowledge about the accuracy of the extracted measurement or other dimensional value of the at least one object of interest.
  • the software associated with the image-capture devices and/or hardware in which the image-capture device is integrated is configured to elicit and receive from the user a selection of a region/area of interest in a captured image(s) of the object of interest.
  • when a scene containing an object of interest is captured, the software elicits and receives selection of a specific object appearing in the scene.
  • the scene presented to the user through a viewfinder or screen on the image-capture device elicits and receives the selection of an object present in the scene, such as by touch or another type of method.
  • the object of interest can be identified or selected by a computer or a user. In some embodiments, the identified object is then analyzed in accordance with the methods herein so as to provide an accurate 3D digital representation of the object(s).
  • the methods of the present invention are suitable for use, and are performed, “in the cloud” (i.e., the software executes on server computers connected to the internet and leased on an as-needed basis).
  • the word “cloud” as used in the term “point cloud” described as part of the invention is independent of, and unrelated to, “cloud computing” as such.
  • cloud computing has emerged as one optimization of traditional data processing methodologies.
  • a computing cloud is defined as a set of resources (e.g., processing, storage, or other resources) available through a network that can serve at least some traditional datacenter functions for an enterprise.
  • a computing cloud often involves a layer of abstraction such that the applications and users of the computing cloud may not know the specific hardware that the applications are running on, where the hardware is located, and so forth. This allows the computing cloud operator some additional freedom in terms of implementing resources into and out of service, maintenance, and so on.
  • Computing clouds may include public computing clouds, such as Microsoft ® Azure, Amazon ® Web Services, and others, as well as private computing clouds.
  • Communication media appropriate for use in or with the inventions of the present invention may be exemplified by computer-readable instructions, data structures, program modules, or other data stored on non-transient computer-readable media, and may include any information-delivery media.
  • the instructions and data structures stored on the non-transient computer-readable media may be transmitted as a modulated data signal to the computer or server on which the computer-implemented methods of the present invention are executed.
  • a "modulated data signal” may be a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
  • communication media may include wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, radio frequency (RF), microwave, infrared (IR) and other wireless media.
  • computer-readable media may include both local non-transient storage media and remote non-transient storage media connected to the information processors using communication media such as the internet.
  • Non-transient computer-readable media do not include mere signals or modulated carrier waves, but include the storage media that form the source for such signals.
  • the present invention provides a method for generating a 3D digital representation of an object of interest.
  • This method includes: a) receiving a plurality of 2D digital images of a scene, wherein: the scene includes i) at least one object of interest, wherein the object of interest has a plurality of dimensions; ii) at least a portion of the plurality of the 2D digital images of the scene are overlapping with regard to the at least one object of interest; and iii) the plurality of 2D digital images are generated from a single passive image-capture device; and b) processing at least a portion of the plurality of overlapping 2D digital images that includes the at least one object of interest using a 3D reconstruction process that incorporates a structure-from-motion algorithm, thereby generating a 3D digital representation of the at least one object of interest, wherein measurements of one or more of the plurality of dimensions of the at least one object of interest are obtainable from the 3D digital representation.
  • the present invention provides a method for generating a 3D digital representation of an object of interest.
  • This method includes: receiving a plurality of 2D digital images of a scene, wherein: i) the scene includes at least one object of interest, wherein the object of interest has a plurality of dimensions; ii) at least a portion of the plurality of the 2D digital images of the scene are overlapping with regard to the at least one object of interest; and iii) the plurality of 2D digital images are generated from a single passive image-capture device.
  • the method also includes processing at least a portion of the plurality of overlapping 2D digital images that includes the at least one object of interest using a 3D reconstruction process that incorporates a structure-from-motion algorithm, thereby generating a 3D digital representation of the at least one object of interest; and calculating measurements of one or more of the plurality of dimensions of the at least one object of interest from the 3D digital representation.
  • Some embodiments further include displaying the 3D digital representation of the at least one object of interest.
  • Some embodiments further include calculating the plurality of dimension measurements of the at least one object of interest from the 3D digital representation.
  • the single passive image-capture device is a video camera.
  • Some embodiments further include generating at least one of a 3D model, a 3D point cloud, a 3D line cloud, and a 3D edge cloud from the 3D digital representation, wherein each, independently, includes at least one of the plurality of dimensions of the at least one object of interest.
  • the measurements are obtainable substantially without a separate scaling step.
  • Some embodiments further include selecting one or more of the plurality of dimensions in the at least one object of interest, wherein each of the selected dimensions, independently, includes an actual measurement value; extracting measurement data from the selected dimensions; and processing the extracted measurement data to provide an extracted measurement value for each selected dimension.
  • At least one of the selection steps is automatically performed by a computer. In some embodiments, either or both of the selection steps is elicited and received by a computer from a user. In some such embodiments, a pixel accuracy of each extracted measurement value, independently, is represented in pixel units according to the following formula:
  • the pixel accuracy of each extracted measurement value is about one pixel.
  • each extracted measurement value of each selected dimension is, independently, within about 5% of each corresponding actual measurement value.
  • Some embodiments further include generating boundary information for the at least one object of interest.
  • the present invention provides a computerized method of obtaining at least one measurement of an object of interest
  • This computerized method includes: a) receiving a plurality of 2D images of a scene from a single passive image-capture device, wherein the plurality of 2D images includes image data of at least one object of interest present in the scene, and at least a portion of the plurality of 2D images of the scene are at least partially overlapping with regard to the at least one object of interest, thereby providing a plurality of overlapping 2D images that includes the at least one object of interest; b) generating, by the computer, a 3D representation of the at least one object of interest, wherein the 3D digital representation is obtained from at least a portion of the 2D digital images incorporating the object using a process incorporating a structure-from-motion algorithm; c) eliciting and receiving selections, made by either or both the computer or the user, of one or more dimensions of interest in the at least one object of interest, wherein each dimension, independently,
  • an accuracy of each extracted measurement value, independently, is represented in pixels according to formula:
  • a pixel accuracy of each extracted measurement value is about one pixel.
  • the plurality of 2D images includes video images.
  • Some embodiments further include generating boundary information for the at least one object of interest.
  • the present invention provides a computerized method of boundary detection.
  • this method includes: a) receiving a plurality of 2D digital images of a scene, wherein: i) the scene includes at least one object of interest having a plurality of boundaries; ii) at least a portion of the plurality of 2D digital images is overlapping with regard to the at least one object of interest; iii) the plurality of 2D digital images are generated from a single passive image-capture device; and b) processing at least a portion of the plurality of overlapping 2D digital images that include the at least one object of interest using a method that incorporates a structure-from-motion algorithm, thereby providing detected boundary information for at least a portion of the at least one object of interest, wherein the detected boundary information can be represented as at least one of: a 3D digital representation, a 3D model, a 3D point cloud, a 3D line cloud, and a 3D edge cloud, each corresponding to at least a portion of
  • the implementer may opt for a mainly hardware and/or firmware vehicle (e.g., application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or digital signal processors (DSPs)); if flexibility is paramount, the implementer may opt for a mainly software implementation; or, yet again alternatively, the implementer may opt for some combination of hardware, software, and/or firmware.
  • Examples of a signal-bearing medium include, but are not limited to, the following: a recordable-type medium such as a floppy disk, a hard disk drive, a CD, a DVD, a digital tape, a computer memory, etc.; and a remote non-transitory storage medium accessed using a transmission-type medium such as a digital and/or an analog communication medium (e.g., a fiber optic cable, a waveguide, a wired communications link, a wireless communication link, etc.), for example a server accessed via the internet.
  • a typical data processing system generally includes one or more of a system unit housing, a video display device, a memory such as volatile and non-volatile memory, processors such as microprocessors and digital signal processors, computational entities such as operating systems, drivers, graphical user interfaces, and applications programs, one or more interaction devices, such as a touch pad or screen, and/or control systems including feedback loops and control motors, e.g., feedback for sensing position and/or velocity; control motors for moving and/or adjusting components and/or quantities.
  • a typical data processing system may be implemented utilizing any suitable commercially available components, such as those typically found in data computing/communication and/or network computing/communication systems.
  • any two components so associated can also be viewed as being “operably connected”, or “operably coupled”, to each other to achieve the desired functionality, and any two components capable of being so associated can also be viewed as being “operably coupleable”, to each other to achieve the desired functionality.
  • operably coupleable include but are not limited to physically mateable and/or physically interacting components and/or wirelessly interactable and/or wirelessly interacting components and/or logically interacting and/or logically interactable components.
  • FIG. 5A shows a point-cloud image 501 illustrating the dense point-cloud output.
  • FIG. 5B shows the point-cloud image 501 with the line cloud 511 of the room boundary superimposed on the point cloud 501.
  • FIG. 5C shows an image of the floor layout wireframe including line cloud 511 and six labeled edges (A, B, C, D, E, and F) of the line cloud 511.
  • DXF Drawing Exchange Format
  • the measurement error in the floor layout obtained according to the inventive methodology was 0.266% or less as compared to the actual measured value.
  • FIGS. 6A, 6B and 6C illustrate the respective outputs of the dense point-cloud output, the line-cloud output and the wireframe output.
  • the curved wall is represented by a plurality of short straight-line segments in the line cloud 611 that approximate the curve to a suitable accuracy.
  • FIG. 6A shows a point-cloud image 601 illustrating the dense point-cloud output.
  • FIG. 6B shows the point-cloud image 601 with a line cloud 611 of the room boundary superimposed on the point cloud 601. Once the boundaries were identified, a wireframe of the floor layout was generated. This process served to optimize the extraction of the boundary to achieve sub-pixel level accuracy.
  • FIG. 6C shows an image of the floor layout wireframe including line cloud 611 and six labeled edges (A, B, C, D, E, and F) of the line cloud 611.
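A small sketch of the arithmetic behind that straight-segment approximation, for an idealized circular-arc wall: the maximum deviation (sagitta) of a chord subtending angle theta on a circle of radius r is r(1 - cos(theta/2)), so choosing theta such that the sagitta stays below a tolerance bounds each segment's deviation from the curve. The tolerance is a user choice, not a value from the patent.

    import numpy as np

    def polyline_for_arc(radius, sweep_rad, tol):
        """Approximate a circular arc by straight segments whose maximum
        deviation (sagitta) from the true curve is at most tol (same units
        as radius). Returns the segment endpoints as an (n+1, 2) array."""
        theta = 2.0 * np.arccos(max(1.0 - tol / radius, -1.0))  # per-segment angle
        n = max(1, int(np.ceil(sweep_rad / theta)))
        ts = np.linspace(0.0, sweep_rad, n + 1)
        return np.column_stack([radius * np.cos(ts), radius * np.sin(ts)])

    # Example: a quarter-circle wall of radius 3 m, held to a 5 mm tolerance,
    # needs about 14 straight segments.
    pts = polyline_for_arc(3.0, np.pi / 2, 0.005)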
  • the present invention provides a first method that generates a 3D digital representation of an object of interest.
  • the first method includes: a) receiving, into a computer, a plurality of 2D digital images of a scene, wherein: i) the scene includes a first object of interest, wherein the object of interest has a plurality of dimensions; ii) at least a portion of the plurality of the 2D digital images of the scene are overlapping with regard to the first object of interest; and iii) the plurality of 2D digital images are generated from a single passive image-capture device; b) processing, by the computer, at least a portion of the plurality of overlapping 2D digital images that includes the first object of interest using a 3D reconstruction process that incorporates a structure-from-motion algorithm, thereby generating a 3D digital representation of the first object of interest; and c) generating, using the computer, measurements of a first plurality of the plurality of dimensions of the first object of interest from the 3D digital representation
  • the single passive image-capture device is a video camera.
  • Some embodiments of the first method further include: using the 3D digital representation for generating at least one of a 3D model, a 3D point cloud, a 3D line cloud, and a 3D edge cloud, wherein each, independently, comprises at least one of the plurality of dimensions of the first object of interest.
  • the obtaining of the measurements is performed substantially without a separate scaling operation.
  • Some embodiments of the first method further include: a) selecting at least one of the plurality of dimensions in the first object of interest, wherein each of the selected dimensions, independently, comprises an actual measurement value; b) extracting measurement data from the selected dimensions; and c) processing the extracted measurement data to provide an extracted measurement value for each selected dimension.
  • the selecting of the at least one of the plurality of dimensions is performed automatically by a computer.
  • the selecting of the at least one of the plurality of dimensions includes eliciting and receiving into a computer information that specifies the at least one of the plurality of dimensions from a user.
  • a pixel accuracy of each extracted measurement value is represented in pixel units according to formula:
  • the pixel accuracy of each extracted measurement value is about one pixel.
  • each value of the extracted measurement data of each selected dimension is, independently, within about 5% of each corresponding actual measurement value.
  • Some embodiments of the first method further include: generating boundary information for the first object of interest.
  • the present invention provides a second method that obtains at least one measurement of an object of interest.
  • the second method includes: a) receiving a plurality of 2D images of a scene from a single passive image-capture device, wherein the plurality of 2D images includes image data of a first object of interest present in the scene, and at least a portion of the plurality of 2D images of the scene are at least partially overlapping with regard to the first object of interest, thereby providing a plurality of overlapping 2D images that includes the first object of interest; b) generating, by the computer, a 3D representation of the first object of interest, wherein the 3D digital representation is obtained from at least a portion of the 2D digital images incorporating the first object using a process incorporating a structure-from-motion algorithm; c) eliciting and receiving, from either or both the computer or the user, selection-identification information that identifies a plurality of dimensions of interest in the first object of interest, wherein each dimension, independently, comprises
  • an accuracy of each extracted measurement value, independently, is represented in pixels according to formula:
  • a pixel accuracy of each extracted measurement value is about one pixel.
  • the images in the plurality of 2D images are video images. Some embodiments of the second method further include generating boundary information for the first object of interest.
  • the present invention provides a third method that detects boundaries.
  • the third method includes: a) receiving a plurality of 2D digital images of a scene, wherein: i) the scene includes a first object of interest having a plurality of boundaries, ii) at least a portion of the plurality of 2D digital images is overlapping with regard to the first object of interest, and iii) the plurality of 2D digital images are generated from a single passive image-capture device; and b) processing at least a portion of the plurality of overlapping 2D digital images that include the first object of interest using a method that incorporates a structure-from-motion algorithm, thereby providing detected boundary information for at least a portion of the first object of interest, wherein the detected boundary information can be represented as at least one of: i) a 3D digital representation, ii) a 3D model, iii) a 3D point cloud, iv) a 3D line cloud, and v) a 3D edge cloud.
  • the single passive image-capture device is a video camera.
  • the measurements of at least a portion of the first object of interest are obtainable from the detected boundary information.
  • the first method, the second method and the third method are combined and executed as a single process.

Abstract

The inventions herein relate generally to improvements in photogrammetry and devices suitable for obtaining such improvements. Some embodiments use only a single passive image-capture device to obtain overlapping 2D images, where such images at least partially overlap with regard to at least one object of interest in a scene. Such images can be processed using methods incorporating structure from motion algorithms. Accurate 3D digital representations of the at least one object of interest can be obtained. Substantially accurate measurements and other useful information regarding the at least one object of interest are obtainable from the methodology herein.

Description

PHOTOGRAMMETRIC METHODS AND DEVICES RELATED THERETO
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims priority to US Utility Patent Application No. 14/826,113, filed August 13, 2015, and US Provisional Application No. 62/066,925, filed October 22, 2014, both entitled "Photogrammetric Methods and Devices Related Thereto." This application also claims priority to US Provisional Application No. 62/165,995, filed May 24, 2015, entitled "Indoor Survey Devices and Methods." The disclosures of each of these applications are incorporated in their entirety by this reference.
FIELD OF THE INVENTION
[0002] The inventions herein relate generally to improvements in photogrammetry and devices suitable for obtaining and utilizing such improvements.
BACKGROUND OF THE INVENTION
[0003] Photogrammetry is the science of obtaining measurements from photographs, especially for recovering the exact or nearly-exact positions of surface points. While photogrammetry is emerging as a robust, non-contact technique to obtain measurements of objects, scenes, landscapes, etc., there are limitations to existing methods, some of which, for example, are set forth in the following few paragraphs.
[0004] Accurate three-dimensional (3D) digital representations of objects can be obtained using methods that utilize active-sensing techniques, such as systems that emit structured light, laser beams or the like, record images of objects illuminated by the emitted light, and then determine the 3D measurements from the recorded images. A laser scanner is an example of a standalone device that utilizes structured light to generate measurements of objects. When used in mobile devices, such as smartphones and tablets, emission of the structured light used for 2D and 3D image generation can be achieved by including a separate hardware device as a peripheral. This peripheral is configured to emit, for example, structured light to generate a point cloud (or depth map) from which data about the object of interest can be derived using photogrammetric algorithms. Active-sensing methods using such a peripheral device are provided by, for example, the Structure Sensor (see the internet URL structure.io) and the DPI-8 kit or the DPI-8SR kit products (see the internet URL www.dotproduct3d.com). While often providing accurate image data, it is nonetheless cumbersome for users to have to add a clamp-on or other type of peripheral equipment to their mobile devices. Alternatively, active-sensing means can be integrated into mobile devices, such as in Google's Tango® product.
[0005] Existing passive photogrammetry methods - that is, methods that do not use structured light, lasers or the like but which, for example, utilize images captured by a camera from which to derive measurements, etc. - can also be problematic to use. Conventional stereo/2D or 3D cameras typically obtain two images of an object simultaneously from two viewpoints that are typically separated, for example, by the interpupillary distance (IPD) of a person (which can range from about 52 to about 78 mm according to the 1988 Gordan et al. "Anthropometric Survey of US Army Personnel, Methods and Summary Statistics." TR-89-044. Natick MA: U.S. Army Natick Research, Development and Engineering Center). Such stereo images generally have insufficient parallax for high-quality measurement when used to obtain data regarding distant objects (e.g., objects more than a few (about one to about five) meters away from the cameras). To obtain suitable parallax using such methods, the user will be directed to use a template or framework incorporated in, for example, software associated with the image-capture device to guide orientation of the image-capture device relative to the object of interest. This technique can ensure that a sufficient number of appropriately overlapping images of the object of interest are obtained. Alternatively, the user can be provided with general instructions on how to orient the camera and/or object so as to obtain appropriate overlap. Both of these techniques for guiding the user can be used to provide accurate visualization of the object of interest but are nonetheless cumbersome and prone to user error.
[0006] It is possible to obtain accurate measurements from photographs by using multiple images of an object of interest. When placed in a 3D context (i.e., "multiple view geometry"), the three-dimensional points of an object of interest can be estimated from measurements taken from two or more photographic images captured from different positions. Corresponding points are identified on each image. A line of sight (or ray) can be constructed from the camera location to the point on the object. Triangulation allows determination of the 3D location of the point both in relation to the object's orientation in space, as well as with regard to that point's orientation and/or position in relation to other points.
[0007] Methods for passive photogrammetry in which 3D digital representations of the object(s) of interest can be used to derive measurements and other detail of interest are disclosed in U.S. Patent 8,897,539, titled "Using images to create measurements of structures through the videogrammetric process"; PCT Publication No. WO2013/173383 by Brilakis et al., titled "Methods and apparatus for processing image streams"; and U.S. Patent 8,855,406 to Lim et al., titled "Egomotion using assorted features"; the disclosures of which are incorporated in their entireties by this reference. Notably, the methodologies disclosed in each of these references require the use of two cameras to capture two-dimensional (2D) images from which a 3D digital representation can thereby be obtained.
[0008] An example of fairly accurate passive photogrammetry that utilizes multiple images generated from a single camera is provided by Photomodeler (photomodeler.com). This software product allows a user to generate a 3D digital representation of an object of interest from multiple overlapping images, where the relevant detail is provided by the orientation of images in a known area of space. In some implementations, accurate measurements can be obtained from the 3D digital representations of the object(s) of interest. However, Photomodeler requires a user to conduct explicit calibration in a separate step to achieve such accuracy. Once the 3D orientation is obtained, measurement and other detail information regarding the object of interest can be provided for use. At least part of this calibration step comprises users performing manual boundary identification. This calibration process is time-consuming, currently requiring the user to image a chessboard marker in a minimum number of photographs taken from different angles and distances with respect to the image-capture device, whereby more images will provide more accurate calibration. Moreover, to measure objects of interest that are at longer distances from the camera, accurate measurements of the object of interest require a larger calibration surface (e.g., about 6 ft. x about 6 ft. (about 1.83 meters by 1.83 meters)). As might be recognized, this physical calibration step provides the information necessary to orient the object(s) of interest in space so as to make it possible to provide 3D digital representations of the object(s) of interest so that measurements can be obtained.
[0009] Recently issued U.S. Patent No. 8,953,024, the disclosure of which is incorporated herein in its entirety, indicates that 3D digital models of scenes can be generated using a passive digital video camera using, in one implementation, structure-from-motion algorithms. Among other things, there is no disclosure in the '024 patent that sufficient detail about individual objects present in the scene can be obtained to allow specific parameters of such objects to be resolved in order to obtain accurate 3D digital representations suitable to provide measurements or the like.
[0010] In light of these and other issues, there remains a need for improvements in photogrammetry that allow a user to obtain accurate 3D digital representations of an object of interest (or a collection of objects of interest) without the need for two-camera image acquisition and/or the use of cumbersome processing steps. Still further, it would be desirable to have methods and devices to obtain accurate 3D digital representations of the object(s) using a single image-capture device, such as those integrated into mobile devices (e.g., smart phones, tablets, etc.). Yet further, it would be desirable to be able to obtain substantially accurate measurements of object(s) of interest in a scene. Still further, it would be desirable to be able to extract information about a scene or location or objects in a scene or location so as to provide survey-quality information for use. The present invention provides this and other benefits.
SUMMARY OF THE INVENTION
[0011] In one embodiment, the invention provides a method for generating 3D digital representations of an object of interest using an image-capture device. An exemplary method comprises receiving a plurality of 2D digital images of a scene, where at least one object of interest is present in the scene. The 2D digital images will at least partially overlap with regard to the object of interest. In order to generate the 3D digital representation of the object of interest, at least some of the 2D digital overlapping images of the object are processed using methodology that incorporates a structure-from-motion algorithm. The 3D digital
representations obtained in accordance with the invention are suitable for generating one or more of a 3D model, a 3D point cloud, a 3D line cloud or a 3D edge cloud, wherein each, independently, corresponds to one or more dimensions in the object. Further, the 3D digital representation, and any data or other information obtainable therefrom, is accurate in relation to the dimensions of the actual object, which allows substantially accurate measurements of one or more dimensions of the object to be obtained.
[0012] In a further embodiment, the invention provides a method of detecting boundaries in at least one object of interest in a scene. In this regard, overlapping 2D digital images of an object of interest in a scene are generated. Boundary detection information regarding the object is generated from a process that incorporates a structure-from-motion algorithm. With respect to the object of interest in the scene, the boundary detection information can be used to provide measurements, 3D digital representations, 3D point clouds, 3D line clouds, 3D edge clouds and the like.
[0013] The overlapping 2D digital images used in the present invention can be obtained from a single image-capture device. Still further, the single image-capture device can be a video camera. The 2D digital images can be generated by an image-capture device that comprises a passive sensing technique. Yet further, the 2D digital images can be generated by an image-capture device that consists essentially of a passive sensing technique. The image-capture devices can be integrated into a device such as a smartphone, tablet or wearable device, or the image-capture device can be a stand-alone camera device. The image-capture device can also be incorporated in a specialized measurement device. Accordingly, the present invention relates to one or more devices that incorporate the methods herein.
[0014] In further embodiments, the present invention relates to devices and methods for generating surveys of interior and exterior scenes or objects in the scenes using image capture devices associated with image processing techniques suitable to allow survey information to be returned quickly to a user and where such survey information can optionally be further processed. Such surveys and their included survey information relating to interior and exterior scenes can be used in applications such as construction/remodeling estimation, 3D model generation, insurance policy underwriting and adjusting, interior and exterior design efforts, real estate marketing, inventory management and other areas where it can be desirable to obtain information about features and dimensions of one or more features or objects present in the scene.
[0015] Additional advantages of the invention will be set forth in part in the description that follows, and in part will be apparent from the description, or may be learned by practice of the invention. The advantages of the invention will be realized and attained by means of the elements and combinations particularly pointed out in the appended claims. It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
BRIEF DESCRIPTION OF THE DRAWINGS
[0016] FIG. 1A is a block diagram of a system 101 according to some embodiments of the present invention.
[0017] FIG. 1B is a flowchart of a method 102 illustrating an exemplary method to obtain 3D digital representations of an object of interest according to the methodology herein.
[0018] FIG. 2 is a flowchart of a method 201 illustrating an exemplary method to perform the structure-recovery portion 125 of the process of FIG. 1B.
[0019] FIG. 3 is a flowchart of a method 301 illustrating an exemplary methodology for use in navigation applications for robots and the like.
[0020] FIG. 4 is a flowchart of a method 401 illustrating an exemplary method to perform a simultaneous-localization-and-mapping (SLAM) portion 325 of method 301 of FIG. 3.
[0021] FIGS. 5A, 5B and 5C are images that illustrate various steps in obtaining measurements of a small office using the inventive methodology.
[0022] FIGS. 6A, 6B and 6C are images that illustrate various steps in obtaining
measurements of a room having a curved wall using the inventive methodology.
DETAILED DESCRIPTION OF THE INVENTION
[0023] Many aspects of the disclosure can be better understood with reference to the Figures presented herewith. The Figures are intended to illustrate the various features of the present disclosure. Moreover, like references in the drawings designate corresponding parts among the several views. While several implementations may be described in connection with the included drawings, there is no intent to limit the disclosure to the implementations disclosed herein. To the contrary, the intent is to cover all alternatives, modifications, and equivalents.
[0024] The term "substantially" is meant to permit deviations from the descriptive term that do not negatively impact the intended purpose. All descriptive terms used herein are implicitly understood to be modified by the word "substantially," even if the descriptive term is not explicitly modified by the word "substantially."
[0025] In one embodiment, the invention provides a method for generating 3D digital representations of an object of interest in a scene from an image-capture device. An exemplary method comprises receiving a plurality of 2D digital images of the scene, where at least one object of interest is present in the scene. The 2D digital images will at least partially overlap with regard to the object of interest. In order to generate the 3D digital representation of the object of interest, at least some of the 2D digital overlapping images of the object are processed using methodology that incorporates a structure-from-motion algorithm. The 3D digital representations obtained in accordance with the invention are suitable for generating one or more of a 3D model, a 3D point cloud, a 3D line cloud or a 3D edge cloud, wherein each, independently, corresponds to one or more dimensions in the object. Further, the 3D digital representations, and any data or other information obtainable therefrom, are accurate in relation to the dimensions of the actual object, which allows accurate measurements of one or more dimensions of the object to be obtained.
[0026] As used herein, "overlapping images" means individual images that each, independently, include at least one object of interest, where such images overlap each other insofar as one or more dimensions of the object of interest are concerned. "Overlapping" in relation to the invention herein is described in further detail hereinbelow.
[0027] As used herein, an "object of interest" encompasses a wide variety of objects such as, for example, structures, parts of structures, landscapes, vehicles, people, animals and the like. Indeed, "object of interest" can be anything from which a 2D image can be obtained and that from which information suitable for generation of accurate 3D digital representations of such objects can be obtained according to the methodology herein. The at least one object of interest can have multiple dimensions, such as linear or spatial dimensions, some or all of which may be of interest, such as to provide measurements or other useful information. Further, the
methodology herein can be utilized to generate accurate 3D digital representations of more than one object of interest in a scene, such as a collection of smaller objects (e.g., doors, windows, etc.) associated with a larger object (e.g., the overall dimensions of a building) where such collection of smaller and larger objects are present in the plurality of overlapping 2D images in a scene.
[0028] The at least one object of interest, for example, can be a roof on a structure that is present in a scene that includes the structure, landscaping and other objects. The length of the roof on a front side of the structure (such as in meters or feet, etc.) could be at least one dimension of interest. Alternatively, each of the dimensions of the roof (such as length on the back, front and sides of the structure and the pitch) could comprise a plurality of dimensions of interest. As would be recognized, each such dimension/feature will have an actual measurement value that is obtainable when a physical measurement of the length, depth, etc., is conducted, such as by a linear measurement tool or an electronic distance measurement tool.
[0029] The overlapping 2D digital images used in the present invention can be obtained from a single image-capture device. Still further, the single image-capture device can be a video camera. The 2D digital images can be generated by an image-capture device that comprises a passive sensing technique. Yet further, the 2D digital images can be generated by an image-capture device that consists essentially of a passive sensing technique. The image-capture devices can be integrated into a device such as a smartphone, tablet or wearable device, or the image-capture device can be a stand-alone camera device. The image-capture device can also be incorporated in a specialized measurement device. Accordingly, the present invention relates to one or more devices that incorporate the methods herein. [0030] In accordance with the methods herein, an extracted measurement value of the one or a plurality of dimensions in the object of interest, and other useful information, such as boundary detection information as discussed hereinbelow, can be obtained using a single passive image-capture device, such as that integrated into a smartphone, tablet, wearable device, digital camera (for example, digital cameras on drones) or the like.
[0031] When the plurality of overlapping 2D images is derived from a video-image-capture device, the images will be overlapping. As used herein, "video" means generally that the images are taken, for example, as single frames in quick succession for playback to provide the illusion of motion to a viewer. In some aspects, video suitable for use in the present invention comprises at least about 24 frames per second ("fps"), or at least about 28 fps, or at least about 30 fps, or any suitable fps as appropriate in a specific context.
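By way of non-limiting illustration only (Python with OpenCV; the file name and sampling stride are assumptions, not part of this disclosure), overlapping 2D frames can be sampled from such a video stream as follows; at 30 fps, keeping every fifth frame typically preserves heavy frame-to-frame overlap:

    import cv2

    def sample_overlapping_frames(video_path, stride=5):
        """Return every `stride`-th frame of the video as a 2D still image."""
        capture = cv2.VideoCapture(video_path)
        frames, index = [], 0
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            if index % stride == 0:
                frames.append(frame)
            index += 1
        capture.release()
        return frames

    frames = sample_overlapping_frames("scene.mp4")  # hypothetical input file
    print(f"extracted {len(frames)} overlapping frames")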
[0032] As used herein, "image-capture-device calibration" is the process of determining internal image-capture-device parameters (e.g., focal length, skew, principal point, and lens distortion) from a plurality of images taken of an object with known dimensions (e.g., a planar surface with a chessboard pattern). Image-capture-device calibration is used for relating image-capture-device measurements with measurements in the real "3D" world. Objects in the real world are not only three-dimensional, they also occupy physical space measured in physical units. Hence, the relation between the image-capture device's natural units (pixels) and the units of the physical world (e.g., meters) can be a significant component in any attempt to reconstruct a 3D scene and/or an object incorporated therein. A "calibrated image-capture device" is an image-capture device that has undergone a calibration process. Similarly, an "uncalibrated image-capture device" is an image-capture device that has not been put through a calibration process, in that no information or substantially no information regarding the internal image-capture-device parameters is provided and substantially the only available information about the images is presented in the image/video frame itself. In some embodiments, the present invention incorporates a calibrated image-capture device. In other embodiments, the present invention incorporates an uncalibrated image-capture device. In some embodiments, the present invention extracts metadata (such as EXIF tags) that includes camera-lens data, focal length data, time data, and/or GPS data, and uses that additional data to further process the images into point-edge-cloud data.
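For context, the explicit chessboard calibration referred to above is commonly performed with off-the-shelf routines; the following Python/OpenCV sketch (board dimensions, square size and file paths are assumptions) recovers the internal parameters, namely focal length, principal point and lens-distortion coefficients, from multiple chessboard images:

    import cv2
    import numpy as np
    import glob

    pattern = (9, 6)              # interior chessboard corners (assumed board)
    square = 0.025                # square edge length in meters (assumed)

    # 3D coordinates of the board corners in the board's own coordinate frame.
    object_corners = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    object_corners[:, :2] = (np.mgrid[0:pattern[0], 0:pattern[1]]
                             .T.reshape(-1, 2) * square)

    object_points, image_points = [], []
    for path in glob.glob("calibration/*.jpg"):   # hypothetical image folder
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            object_points.append(object_corners)
            image_points.append(corners)

    # Recovers the intrinsic matrix K and the distortion coefficients.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        object_points, image_points, gray.shape[::-1], None, None)
    print("intrinsic matrix:\n", K, "\ndistortion:", dist.ravel())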
[0033] In accordance with some aspects of the invention herein, use of a plurality of 2D overlapping images derived from video greatly improves the ease and quality of user capture of the plurality of 2D images that can be processed to provide accurate 3D digital representations of the at least one object of interest, for example, so as to generate substantially accurate measurements of the object. As one example of this improvement, the sequential nature of video has been found by the inventors herein to improve 3D digital representation quality due to an attendant reduction in the errors associated with a user needing to obtain proper overlap of the plurality of overlapping 2D images so that detailed information about the object of interest can be derived. Another advantage of the present invention is the shortened time needed to obtain the overlapping 2D images used in the present invention to create detailed information about the object of interest such that an accurate 3D digital representation can be obtained for use. Still further, the inventors herein have found that use of video as the source of the plurality of overlapping 2D images can allow tracking of points both inside and outside the boundaries of the images of the object of interest. That is, points first "followed" while within the image frame can continue to be tracked via estimated positions after they move outside the boundaries of intermediate image frames, so that when those points re-enter the field of view in later image frames, the later-followed points can be substantially correlated to those same features in the earlier image frames. Such point tracking provides improvements in the 2D-image data used to generate the 3D digital representations of the at least one object of interest in a scene. In turn, it has been found that the quality of the 3D digital representations of the object(s) of interest herein can be improved.
[0034] While the present invention is particularly suitable for use with image-capture devices that generate a video from which overlapping 2D images can be provided, the present invention is not limited to the use of video. That is, the plurality of overlapping 2D images can suitably be provided by an image-capture device that provides 2D still images, such as a "point and shoot" digital camera. When using such a digital still camera, the at least two overlapping images can be obtained from images that comprise a suitable parallax between and amongst the images to allow generation of information from which accurate 3D digital representations of the object(s) can be obtained.
[0035] As would be recognized, a plurality of still 2D images taken in sequence can also be defined as "video" if played back at a speed that allows the perception of motion. Therefore, in some aspects, the plurality of overlapping 2D images can be derived from a plurality of digital still images and/or from video without affecting the substance of the present invention, as long as the plurality of overlapping 2D images that include an object of interest can be suitably processed to generate detailed information from which the accurate 3D digital representations of the object(s) of interest can be generated.
[0036] The overlapping 2D images of a scene will include at least a portion of the at least one object of interest. In accordance with the invention, at least a portion of the overlapping 2D images of the scene will also be overlapping with regard to the at least one object of interest.
[0037] In some aspects, the plurality of overlapping 2D images includes at least two (2) suitably overlapping 2D images, where the overlap is in relation to the at least one object of interest. In other embodiments, the plurality of overlapping 2D images includes at least 5, at least 10, or at least 15 or at least 20 suitably overlapping 2D images, where the overlap is in relation to the at least one object of interest. As would be recognized, the number of
overlapping 2D images needed to generate an accurate 3D digital representation of the object(s) of interest in a scene will depend, in part, on factors such as the size, texture, illumination and potential occlusions of the object of interest, as well as the distance of the object of interest from the image-capture device.
[0038] As noted, sequential images extracted from video will possess overlap. The overlap present in sequential images generated from video will depend, in part, on the speed at which the user moves the image-capture device around the at least one object of interest and the orientation of the image-capture device in space with reference to the object of interest.
[0039] When the image-capture device is a still digital camera, the 2D images can be made suitably overlapping with regard to the at least one object of interest using one or more methods known to one of ordinary skill in the art, such as, in some embodiments, the camera operator taking the successive still images including the at least one object of interest while changing the angular orientation, the linear location, the distance, or a combination thereof in a manner that has the object of interest in each successive image captured. In this regard, the plurality of overlapping 2D images are suitably processable to allow accurate 3D digital representations of the at least one object of interest to be derived therefrom.
[0040] To provide suitably overlapping 2D images incorporating the at least one object of interest from sources other than video, the individual images can be overlapped, where such overlap is, in reference to the at least one object of interest, at least about 50% or at least about 60% or at least about 70% or at least about 80% or at least about 90%. In some embodiments, the amount of overlap in the individual images in the plurality of overlapping 2D images, as well as the total number of images needed to provide an accurate digital representation of the object of interest, will also depend, in part, on the relevant features of the object(s). In some embodiments, such relevant features include, for example, the amount of randomness in the object shape, the texture and size of the at least one object of interest relative to the image-capture device, as well as the complexity and other features of the overall scene.
[0041] In a further embodiment, the present invention comprises image-capture devices comprising passive sensing techniques, and methods relating thereto, utilizing a plurality of overlapping 2D images suitable for generating accurate 3D digital representations of the at least one object of interest in a scene. The inventors herein have found that accurate 3D digital representations of the object(s) present in a scene can be obtained using a plurality of overlapping 2D images incorporating the object(s) substantially without the use of an active sensor/signal source, such as a laser scanner or the like. As would be understood by one of ordinary skill in the art, "passive image-capture devices" means that substantially no active signal source, such as a laser or structured light (as opposed to camera flash or general-illumination devices) or sound or other reflective or responsive signal, is utilized to measure or otherwise sense the at least one object of interest so as to provide the information needed to generate the accurate 3D digital representations of the at least one object of interest present in a scene.
[0042] As used herein, "accurate" in relation to the 3D digital representations of the at least one object of interest comprises, in part, data or other information from which substantially accurate measurements of the object(s) can be obtained as defined elsewhere herein.
[0043] In some embodiments, the present invention further includes passive
photogrammetry techniques where the images are obtained from a single image-capture device. Yet further, in some embodiments, no more than one passive image-capture device is used in accordance with the methods herein. This use of images from only a single image-capture device is in contrast to the traditional use of at least two cameras, or one or more projectors, to obtain 3D digital representations of the at least one object of interest using passive sensing methods as disclosed, for example, in US Patent Publication No. US2013/0083990 and PCT Publication No. WO2013/173383, which are each incorporated by reference as set forth previously. In particular, and as would be recognized by one of ordinary skill in the art, prior-art passive image-capture devices used to generate 3D digital representations of the at least one object of interest utilize at least two cameras (or projectors) displaced in a direction away from one another (e.g., horizontally) so as to obtain at least two differing views of a scene and any objects included therein. By comparing these at-least-two images obtained from two image-capture devices, the relative depth information of the scene and/or objects present therein can be obtained for display to a viewer, with or without processing of the images therebetween. [0044] Yet further, prior-art methods perform poorly if the motion between two frames is too small or limited. In contrast, the methodology herein leverages such small or limited motions and creates improved results in such situations.
[0045] In further embodiments, the present invention includes methods of using mobile devices configured with passive image-acquisition capability suitable to provide accurate 3D digital representations of the at least one object of interest in a scene. In some embodiments, the methodology herein can be utilized to provide measurements and other useful information regarding the object(s). Yet further, the present invention includes methods of using mobile devices configured with passive image-acquisition technology, whereby substantially accurate measurements of one or more dimensions of the objects can be obtained.
[0046] In some embodiments, point clouds that incorporate information regarding the at least one object of interest are generated using conventional methods. As used herein, a "point cloud" is a set of data points in the same coordinate system. In a three-dimensional coordinate system, these points are usually defined by X, Y, and Z coordinates. In other embodiments, the inventors herein have found that inventive point clouds can be obtained, where such inventive point clouds further include additional data representative of edge information in the object of interest. Yet further, one or more of point clouds, edge clouds and line clouds are obtainable according to the methodology herein, wherein each of these aspects can include data or other information from which measurements or other useful information about the at least one object of interest can be generated. An "edge cloud" is a set of edge points in the same coordinate system, each represented by X, Y and Z coordinates, comprising one or more discontinuities in depth, surface orientation, reflection, or illumination. A "line cloud" is a set of 3D straight lines in the same coordinate system. Each line can be defined using its two end points or its Plücker coordinates.
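Purely as an illustrative sketch of how these three cloud types might be represented in software (the field names are ours, not from this disclosure), in Python:

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class PointCloud:
        xyz: np.ndarray        # (N, 3) array of X, Y, Z coordinates

    @dataclass
    class EdgeCloud:
        xyz: np.ndarray        # (N, 3) array of edge points
        discontinuity: list    # per-point label: "depth", "surface orientation",
                               # "reflection" or "illumination"

    @dataclass
    class LineCloud:
        endpoints: np.ndarray  # (M, 2, 3): each 3D line as its two end points

        def plucker(self) -> np.ndarray:
            """Each line as Plücker coordinates (direction d, moment p x d)."""
            p0, p1 = self.endpoints[:, 0], self.endpoints[:, 1]
            d = p1 - p0
            d = d / np.linalg.norm(d, axis=1, keepdims=True)
            return np.hstack([d, np.cross(p0, d)])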
[0047] Unlike prior-art methodologies that utilize point-level data to generate 3D digital representations, in some aspects the present invention can, in some circumstances, be characterized as "hybrid" in nature in that it is possible to utilize any combination of points, edges, and lines (point+edge+line, point+edge, point+line, edge+line, etc.). As a result, while the prior art can only produce point clouds, with the invention herein it is possible to create point clouds, line clouds, and edge clouds, and any combination thereof. Moreover, prior-art solutions only produce a point cloud with an unknown scale. Therefore, 3D measurements cannot be extracted directly from such a point cloud; the point cloud has to be scaled first. In the present invention, 3D measurements can, in some embodiments, be extracted directly from the point cloud, line cloud and/or edge cloud data substantially without the need for a scaling step. [0048] In another embodiment, the accurate 3D digital representations of the at least one object of interest are generated by processing overlapping 2D-image data generated from one or more discontinuities in depth, surface orientation, reflection, or illumination, wherein such image data is derived from a plurality of overlapping 2D images of an object of interest.
[0049] In one embodiment, a suitable methodology (albeit where two image-capture devices are required to provide dimensions of the object of interest) that can be used for structure recovery is described in US Patent Publication No. 2013/0083990, previously incorporated by reference.
[0050] The methodology herein, in some embodiments, utilizes data or other information extracted from a plurality of overlapping 2D images to create a robust data set for image processing, wherein a plurality of lines, edges and points included therein are specific to lines, edges and points corresponding to the at least one object of interest as incorporated in the plurality of 2D overlapping images of the object. In some contexts, it has been found that the inventive methodology can provide one or more of the following improvements over the prior art: 1) the edge-detection method substantially filters out useless data, noise, and frequencies while preserving the important structural properties of the at least one object of interest; 2) the amount of data needed to provide an accurate 3D digital representation of an object is reduced, as is the need for attendant data processing; and 3) the information needed for object detection and segmentation (i.e., object boundaries) is provided, which addresses an unmet need in Building Information Modeling (BIM). As used herein, "BIM" means an object-oriented building-development tool that utilizes modeling concepts, information technology and software interoperability to design, construct and operate a building project, as well as communicate its details. Further improvements in the present invention are found from the
substantially simultaneous processing of point, line and edge data. Prior art solutions only produce a dense 3D point cloud at the beginning, and in a successive step they extract edge points from the generated 3D dense point cloud.
[0051] In some embodiments, the 2D digital images suitable for use in the present invention may be missing some or all of the information stored in EXIF tags. This can allow images other than JPEG images to be used as input data in the present invention.
[0052] In a further embodiment, the invention provides a method of detecting boundaries in an object of interest in a scene. In this regard, overlapping 2D digital images of an object in a scene are generated. Boundary detection information regarding the object is generated from a process that incorporates a structure-from-motion algorithm. With respect to the at least one object of interest in the scene, the boundary detection information can be used to generate measurements, 3D digital representations, 3D point clouds, 3D line clouds, 3D edge clouds and the like.
[0053] A "boundary" is a contour in the image plane that represents a change in pixel ownership from one object surface to another. "Boundary pixels" mark the transition from one relatively constant region to another, where the constant region can comprise one or more of an object of interest or a scene in which the object appears in the image. Boundary detection is a computer vision problem with broad applicability in areas such as feature extraction, contour grouping, symmetry detection, segmentation of image regions, object recognition, categorization and the like. Detecting boundaries is significantly different from simple edge detection, where "edge detection" is a low-level technique to detect an abrupt change in some image feature, such as brightness or color. In contrast, boundary detection relates to the detection of more global properties, such as texture and, therefore, involves integration of information across an image. So, for example, a heavily textured region might give rise to many edges, but to suitably provide information suitable to generate a 3D digital representation of an object of interest therefrom, there should be substantially no boundary defined within the textured region. Moreover, accurate boundary detection is needed to resolve discontinuities in depth that allow accurate rendering of 3D digital representations.
[0054] In some embodiments, information needed to generate accurate 3D digital representations of the at least one object of interest in a scene can be determined using a "structure-from-motion" algorithm. As would be recognized, a structure-from-motion algorithm can be used to extract 3D geometry information from a plurality of overlapping images of an object or a scene. In accordance with the present invention, information needed to provide accurate 3D digital representations of the object(s) of interest can be generated from a process that incorporates a structure-from-motion algorithm that estimates camera positions for each image frame in the plurality of overlapping images. As would be recognized, many structure-from-motion algorithms incorporate key-point detection and matching so as to form consistent matching tracks, thereby allowing the camera parameters to be solved for.
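As a non-limiting illustration of the key-point detection and matching stage common to such algorithms (not the inventive pipeline itself), the following Python/OpenCV sketch matches features between two overlapping frames and recovers the relative camera pose; the file names and the intrinsic matrix K are assumed values:

    import cv2
    import numpy as np

    img1 = cv2.imread("frame_000.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical
    img2 = cv2.imread("frame_005.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical

    orb = cv2.ORB_create(4000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Assumed intrinsics; the essential matrix then yields the relative
    # camera rotation R and translation direction t for this image pair.
    K = np.array([[1000.0, 0, 640], [0, 1000.0, 360], [0, 0, 1]])
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    print("relative rotation:\n", R)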
[0055] The inventors herein have developed improvements in the ability to analyze the plurality of 2D overlapping images of at least one object of interest, where such improvements, in some embodiments, assist in the generation of 3D digital representations of the object. In this regard, an inventive methodology comprises parameterizing a line with two end-points. This parameterization step provides two advantages over existing line- or point-based 3D reconstruction methodologies, such as those provided by prior art structure-from-motion algorithms, because the inventive methods are able to achieve the following.
[0056] First, a duality is created between points and lines that is preserved by: a) visual triangulation for calculating 3D coordinates of features (points and lines) and b) reprojecting 3D features into the 2D-image plane. This duality allows interchanging the roles of points and lines in the mathematical formulations whenever appropriate.
[0057] Second, a parameterization step facilitates modeling of lens-distortion parameters even when substantially only line-level information is present. Due to deviations from rectilinear projection caused by lens distortion, straight lines in a scene are typically transformed into curves in the image of the scene. Existing line-based 3D-reconstruction algorithms assume that the input data (images or video frames) are already undistorted; this necessitates use of pre-calibrated cameras in prior-art methods. In some embodiments, substantially no such assumption is made in the present invention. As such, uncalibrated cameras are particularly suitable for use in the present invention.
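For illustration, undistorting the 2D end points of a detected line segment, so that straight scene lines again map to straight image lines, can be sketched as follows (the intrinsic matrix and distortion coefficients are assumed values, whether obtained by calibration or self-calibration):

    import cv2
    import numpy as np

    K = np.array([[1000.0, 0, 640], [0, 1000.0, 360], [0, 0, 1]])
    dist = np.array([-0.25, 0.08, 0.0, 0.0, 0.0])  # radial/tangential terms

    segment = np.array([[[120.0, 85.0]],           # l_s, distorted end point
                        [[510.0, 98.0]]])          # l_e, distorted end point
    undistorted = cv2.undistortPoints(segment, K, dist, P=K)  # l'_s and l'_e
    print(undistorted.reshape(-1, 2))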
[0058] Further in regard to the parameterization, the present invention allows reprojection errors to be calculated with a weighting function that substantially does not over- or underestimate the contribution of line and/or edge points to the total reprojection-error cost function.
[0059] In some embodiments, let l_s = (u_s, v_s) and l_e = (u_e, v_e) denote the distorted 2D coordinates of the two end points of a line segment in an image, let L_s = (X_s, Y_s, Z_s)^T and L_e = (X_e, Y_e, Z_e)^T denote their corresponding 3D points, and let P_3x4 be the camera-projection matrix. In some embodiments, an information processor that uses the present invention performs the following:

1) Calculates l'_s = (u'_s, v'_s) and l'_e = (u'_e, v'_e), which are the undistorted coordinates of the two end points.

2) Locates a 3D point P_s on the infinite 3D line that connects the two end points, using l'_s, L_s, L_e and P_3x4.

3) Locates a 3D point P_e on the infinite 3D line that connects the two end points, using l'_e, L_s, L_e and P_3x4, following a process similar to that presented in the previous step.

4) Projects P_s and P_e into the 2D-image plane using P_3x4 to get p_s and p_e.

5) Finds l_h, which is the normalized homogeneous line that connects p_s and p_e.

6) Calculates the reprojection error:

E = 0.5 x ( |<l_h, l'_s>| + |<l_h, l'_e>| )

where |·| represents the absolute value and <·,·> represents the inner product.
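A Python sketch of the above procedure follows; the helper used in steps 2 and 3 selects the point on the infinite 3D line nearest the back-projected viewing ray of the undistorted observation, which is our stand-in for intermediate algebra that does not survive in the text, and the decomposition P = K[R|t] is assumed:

    import numpy as np

    def project(P, X):
        """Project a 3D point with the 3x4 camera matrix into pixel coords."""
        x = P @ np.append(X, 1.0)
        return x[:2] / x[2]

    def point_on_line_for_observation(L_s, L_e, uv, K, R, t):
        """Steps 2/3 (assumed construction): the point on the infinite 3D
        line through L_s, L_e closest to the viewing ray of observation uv."""
        center = -R.T @ t                                  # camera center
        ray = R.T @ np.linalg.inv(K) @ np.append(uv, 1.0)  # viewing-ray dir.
        ray = ray / np.linalg.norm(ray)
        d = (L_e - L_s) / np.linalg.norm(L_e - L_s)        # 3D line direction
        w = L_s - center
        b = d @ ray
        lam = (b * (ray @ w) - (d @ w)) / (1.0 - b * b)    # closest-point solve
        return L_s + lam * d

    def line_reprojection_error(L_s, L_e, ls_u, le_u, K, R, t):
        P = K @ np.hstack([R, t.reshape(3, 1)])
        P_s = point_on_line_for_observation(L_s, L_e, ls_u, K, R, t)  # step 2
        P_e = point_on_line_for_observation(L_s, L_e, le_u, K, R, t)  # step 3
        p_s, p_e = project(P, P_s), project(P, P_e)                   # step 4
        l_h = np.cross(np.append(p_s, 1.0), np.append(p_e, 1.0))      # step 5
        l_h = l_h / np.linalg.norm(l_h[:2])    # normalized homogeneous line
        # step 6: E = 0.5 * (|<l_h, l'_s>| + |<l_h, l'_e>|)
        return 0.5 * (abs(l_h @ np.append(ls_u, 1.0)) +
                      abs(l_h @ np.append(le_u, 1.0)))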
[0060] Figure 1A is a block diagram of a system 101 according to some embodiments of the present invention. In some embodiments, system 101 includes one or more cloud-computing servers 181 each connected to the internet 180. In some embodiments, a non-transitory computer-readable storage medium 183 has photogrammetry instructions and data structures of the present invention stored thereon. In some embodiments, the methods of the present invention execute on cloud-computing server(s) 181 using the photogrammetry instructions and data structures from computer-readable storage medium 183, wherein a user 98 uploads images from still camera 182 and/or video camera 184 into cloud-computing server(s) 181, either directly (e.g., using the cell-phone or other wireless network) or through a conventional personal computer 186 connected to the internet. In other embodiments, photogrammetry instructions and data structures of the present invention are transmitted from computer-readable storage medium 183 into local non-transitory computer-readable storage media 187 (such as rotating optical media (e.g., CDROMs or DVDs) or solid-state memory devices (such as SDHC (secure data high-capacity) FLASH devices)), which are connected to, plugged into, and/or built into cameras 182 or 184 or conventional personal computers 186 to convert such devices from generic information processors into special-purpose systems that convert image data into photogrammetry data according to the present invention. In some embodiments, system 101 omits one or more of the devices shown and still executes the methods of the present invention.
[0061] Figure 1B presents a flowchart of a method 102 illustrating one aspect of the present invention. In Figure 1B, Figure 2, Figure 3, and Figure 4, rectangular boxes represent functions and ovals represent inputs/outputs. In block 100, a plurality of overlapping images is received. These images can be derived from a still image-capture device 182 or a video image-capture device 184 as discussed elsewhere herein. In some embodiments, feature lines are detected and matched/tracked in block 105 and block 110, respectively. The outputs of the detection and matching processes of block 105 and block 110 are corresponding lines of block 115 and corresponding points of block 120. In some embodiments, in block 125, methods, such as linear methods, are used for structure-recovery processes, such as those presented in more detail in reference to Figure 2. In some embodiments, an initial estimation of structure and motion data in block 130 is determined based on the structure recovery of block 125. In some embodiments, hybrid bundle-adjustment techniques in block 135 are used to further refine/optimize the 3D structure and motion data 140, which are used in process 145 to generate a 3D point, line and/or edge cloud 150 representative of the at least one object of interest. In some embodiments, 3D structure and motion data 140 are used in a 3D plane-detection process 155 to detect 3D planes 160. In some embodiments, the 3D point, line and/or edge cloud of block 150 and the 3D planes of block 160 are included in intelligent data smoothing in block 165 to generate the 3D digital representation of block 170 incorporating the at least one object of interest.
[0062] Figure 2 is a flowchart of a method 201 illustrating an exemplary method to perform the structure-recovery portion 125 of the process of Figure 1B. In comparison to previous methodologies using passive image-capture devices, in some embodiments the present invention provides notable benefits relating to the ability to utilize a single image-capture device to generate the plurality of overlapping images. In this regard, Figure 2 illustrates such benefits in relation to the structure recovery of block 125 called out from Figure 1B. In block 200, pairwise epipolar geometries are computed and used in block 205 to build a graph of epipolar geometries. In some embodiments, from the data from which the graph of block 205 is created, the confidence level for each epipolar geometry is calculated in block 210. Where the calculated epipolar geometries are determined to meet the desired confidence level (such as a 90% or 95% or 99% confidence level), a connectivity graph is built in block 215. In block 220, the relative rotations of the various points on the connectivity graph of block 215 are estimated, followed by calculation of global rotations in block 225. In block 230, the relative translation and scaling factor for the resulting data is determined, whereby the data generated in method 201 of Figure 2 is used to provide an initial estimation of structure and motion 130 for further application to the process set out in Figure 1B.
[0063] Further in regard to block 225, wherein the initial estimates of the relative rotation for each pair of images or video frames (those which resulted in confident epipolar geometries) are available, in some embodiments the global rotation for each view is calculated by the following methodology:

1) Build matrix A, wherein A is a (3m)x(3m) matrix and m is the total number of images or video frames. If the relative rotation between views i and j is denoted by R_ij, then for each pair of views i < j the corresponding 3x3 blocks of A are populated (as an example to clarify matrix indices: the diagonal blocks are set to the identity, A_ii = I, and the off-diagonal blocks are set to A_ij = -R_ij).

2) Compute the Singular Value Decomposition (SVD) of A:

A = U Σ V*

[0064] Continuing the global-rotation calculation, once the SVD of A is available, the following methodology is used:

1) Calculate the global rotation for view i from the V* matrix, assembling a 3x3 block from rows 3i+0, 3i+1 and 3i+2 of the three right singular vectors associated with the smallest singular values (columns 3m-3, 3m-2 and 3m-1 of V):

R_i = | V_{3i+0,3m-1}  V_{3i+1,3m-1}  V_{3i+2,3m-1} |
      | V_{3i+0,3m-2}  V_{3i+1,3m-2}  V_{3i+2,3m-2} |
      | V_{3i+0,3m-3}  V_{3i+1,3m-3}  V_{3i+2,3m-3} |

2) Find the closest orthogonal matrix to R_i.
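A hedged Python sketch of this spectral global-rotation step, as reconstructed above (the symmetric fill of A and the block orientation are our conventions, not from the text):

    import numpy as np

    def global_rotations(pairwise, m):
        """pairwise: dict mapping (i, j), i < j, to the 3x3 relative rotation
        R_ij between views i and j (confident epipolar pairs only)."""
        A = np.zeros((3 * m, 3 * m))
        for i in range(m):
            A[3*i:3*i+3, 3*i:3*i+3] = np.eye(3)      # diagonal blocks: I
        for (i, j), R_ij in pairwise.items():
            A[3*i:3*i+3, 3*j:3*j+3] = -R_ij          # off-diagonal: -R_ij
            A[3*j:3*j+3, 3*i:3*i+3] = -R_ij.T        # symmetric fill (assumed)
        _, _, Vt = np.linalg.svd(A)
        basis = Vt[-3:].T                 # columns 3m-3 .. 3m-1 of V
        rotations = []
        for i in range(m):
            R_i = basis[3*i:3*i+3, :].T   # 3x3 block for view i (convention)
            U, _, Wt = np.linalg.svd(R_i) # closest orthogonal matrix via SVD
            R = U @ Wt
            if np.linalg.det(R) < 0:      # enforce a proper rotation
                R = U @ np.diag([1.0, 1.0, -1.0]) @ Wt
            rotations.append(R)
        return rotations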
[0065] Once the scale ambiguity of, for example, a 3D point cloud is resolved, the spatial distance between each point pair in the point cloud will represent the distance between the corresponding physical points in the actual scene. In some embodiments, this is leveraged to extract a wide variety of dimensions and measurements from the point cloud. In some embodiments, the obtained knowledge about corner points, edge/boundary points, blobs, ridges, straight lines, curved boundaries, planar surfaces, curved surfaces, and other primitive geometry elements can provide the capability to identify significant parts of the scene and automatically extract corresponding measurements (length, area, volume, etc.). In some embodiments, the 2D locations of these primitive geometries are first detected in images or video frames. The image-based coordinates are then converted into 3D coordinates via the calculated camera matrices.
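As a trivial illustration (the coordinates are invented for the example), once scale is resolved a dimension is simply the Euclidean distance between the corresponding cloud points:

    import numpy as np

    corner_a = np.array([0.02, 0.01, 3.11])   # e.g., left jamb of a window
    corner_b = np.array([1.24, 0.03, 3.09])   # e.g., right jamb of the window
    width_m = np.linalg.norm(corner_b - corner_a)
    print(f"extracted width: {width_m:.3f} m")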
[0066] As mentioned, the present invention provides accurate 3D digital representations of at least one object in a scene. In one embodiment, the level of accuracy of the 3D digital representations of the object(s) of interest is with reference to one or more of the actual dimensions of the object of interest. In this regard, at least one object of interest is identified, selected or otherwise specified, where the identification, etc., can include identification of at least one dimension of interest in the object, or such identification, etc., may include a plurality of dimensions of interest where each of these dimensions, independently, includes an actual value. As discussed elsewhere herein, the identification, etc., of the at least one object of interest and/or the one or more dimensions in the object(s) can be by either or both of a computer or a user.
[0067] In some embodiments, the accuracy of the measurements obtained according to the invention herein can be characterized in relation to a specified number of pixels. The methodology herein allows a user to obtain measurements of one or more dimensions of the object of interest within up to and including a 1.0-pixel standard deviation or, in other embodiments, within a 0.5-pixel standard deviation. As would be recognized, pixel size is a function of the image-capture device specifications and the distance of the image-capture device from the object of interest. This is illustrated in Table 2 hereinbelow.
[0068] In some embodiments, accuracy in pixels relative to the actual dimensions of the object of interest is represented according to the following formula:
Pixel size in object = (distance of object of interest from IC device) * (IC device sensor size) / (IC device resolution * IC device focal length)
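A worked instance of this formula, using assumed device figures rather than those of any actual device (4.8 mm sensor width, 4.0 mm focal length, 1920-pixel horizontal resolution):

    def pixel_size_on_object(distance_m, sensor_m, resolution_px, focal_m):
        """Pixel footprint on the object, per the formula above (meters)."""
        return distance_m * sensor_m / (resolution_px * focal_m)

    size = pixel_size_on_object(5.0, 0.0048, 1920, 0.004)
    print(f"one pixel spans {size * 1000:.2f} mm on an object 5 m away")
    # ~3.13 mm per pixel, so a 0.5-pixel standard deviation is roughly 1.6 mm.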
[0069] The IC ("image capture") device sensor size, resolution and focal length are features or characteristics of each image-capture device. For example, the below Table 1 sets out some representative specifications for existing image-capture devices: TABLE 1
[0070] The calculations in Table 2 (and Table 3) assume that there is no scaling error. As would be recognized, scaling error will decrease the accuracy of the measurement derived from the image-capture device vs. the actual measurement of the one or more dimensions of the object of interest.
TABLE 2
[0071] It should also be noted that the stated pixel errors are in relation to the general capabilities of image-capture devices available in the market currently. In the future, image-capture devices will be available with higher resolutions that will allow attendant improvements in the accuracy of the inventive methods. Such higher-resolution devices will provide sharper (e.g., less blurred) and less noisy images which will, in turn, result in less pixelization effect, and hence the pixel noise will be decreased to allow smaller pixel errors. Still further, improvements in image-capture devices will result in reduced lens distortion and improvements in the squareness of pixels (i.e., without skews). Improvements in image sensors, as well as other relevant sensors, will also be obtained. Prospective improvements include, but are not limited to, higher focal length and smaller sensor size. Any such improvements, individually or in combination, will result in attendant improvements in the data available from 2D images and, therefore, more accurate measurements will be obtainable using the methodologies herein. Such improved image-capture devices and the resulting data therefrom are contemplated for use with the inventive methods.
[0072] In some embodiments, accuracy of the measurements derived from the image-capture device is also represented in percent error. The methodology herein enables measurements to be derived from the image-capture device having accuracy within, in some embodiments, about 5%, or, in other embodiments, about 10%, or, in still other embodiments, about 20% error relative to the actual measurement value of the object of interest. In some embodiments, this error is calculated from the following formula:

Error % = (((distance of object from IC device) / 2) / (object size)) * ((IC device sensor size) / (IC device resolution * IC device focal length)) * 100%
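Run with the same assumed device figures as the pixel-size example above (an illustration, not a value drawn from Table 3):

    def percent_error(distance_m, object_m, sensor_m, resolution_px, focal_m):
        """Percent error per the formula above: a half-pixel of uncertainty
        expressed as a fraction of the object's actual size."""
        half_pixel_m = (distance_m / 2.0) * sensor_m / (resolution_px * focal_m)
        return half_pixel_m / object_m * 100.0

    print(f"{percent_error(5.0, 1.0, 0.0048, 1920, 0.004):.2f}% "
          "for a 1 m dimension viewed from 5 m")   # ~0.16%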
[0073] Representative % error calculations are presented in Table 3.
TABLE 3
[0074] From Table 3, it is apparent that error in the measurement will be relative to the size of the object of interest (or specific dimensions of interest within the object), with the measurement derived for smaller objects being more accurate relative to the actual dimensions of the object when the image-capture device is closer to the object.
[0075] As an example of measurement accuracy attainable with the inventive methodology herein, an "estimation level of accuracy" may be appropriate when only approximate measurements are required to determine the amount of materials needed for a project. In some embodiments, such "estimation levels of accuracy" are equal to or less than about 20%, or, in other embodiments, about 15%, or, in yet other embodiments, about 10%, but more than about 5%, of the actual dimensions of the at least one object of interest. To illustrate, an extracted measurement value of the at least one object of interest that is one-hundred ten (110) inches (279.4 cm) is within an "estimation level of accuracy" when the actual measurement of the at least one object of interest is 100 inches (254 cm), such that the error is 10%. Situations where such an "estimation level of accuracy" would be valuable are, for example, estimating the materials needed for carpet, wallpaper, paint, sod, roofing and the like.
[0076] In some circumstances, better than an "estimation level of accuracy" will be appropriate. Such cases will call for a "fabrication level of accuracy." In some embodiments, such "fabrication level of accuracy" means that the extracted measurement value is within less than about 5%, or, in other embodiments, less than about 3%, or, in still other embodiments, less than about 2% or less than about 1% of the actual dimensions of the at least one object of interest. Situations where such "fabrication level of accuracy" would be appropriate include, for example, measurements used to manufacture custom cabinets, off-site preparation of construction details (trim), identification of exact dimensions of componentry (e.g., space available for appliances, BIM) and the like.
[0077] In some embodiments, software associated with the methods and devices of the present invention is configured to provide information regarding the error in the measurement presented. For example, in some embodiments, when the measurement of an object is reported to the user as 10 feet (3.048 meters) along one dimension, information about any error in such measurement (as pixel accuracy or % error) is provided as set out elsewhere herein.
[0078] In a further embodiment, the 3D digital representations of the at least one object of interest are derived from the plurality of overlapping 2D images
incorporating the object(s) substantially without need for manual steps to extract measurements, such as by providing manual manipulation to extract the data necessary to generate the 3D digital representations. Still further, in some embodiments, the measurements can be obtained substantially without need for a separate scaling step, such as that required to obtain measurements of objects with the Photomodeler product, for example.
[0079] In one embodiment, an image-capture device can be integrated into a mobile device to allow images of the at least one object of interest to be obtained. Software either included in or associated with the mobile device can be suitably configured to allow the 2D-image processing, data generation, and generation of the 3D digital representation of the object(s) to occur substantially on the mobile device using software and hardware associated with the device. Such software, etc., can also be configured to present to the user a measurement of one or more dimensions of the object of interest or to store such measurement for use.
[0080] In one embodiment, measurements of the at least one object of interest can be obtained using a marker as a reference. For example, a ruler or other standard sized object can be incorporated in a scene that includes the at least one object of interest. Using the known dimensions of the marker, one or more dimensions of the object can be derived using known methods.
[0081] In another embodiment, measurements of the at least one object of interest can be obtained without use of, or in addition to, a marker. In this regard, the invention utilizes an internal or "intrinsic" reference. With this intrinsic reference, the invention herein allows a user to generate substantially accurate measurements of the at least one object of interest. In particular, such substantially accurate measurements are provided, in some aspects, by incorporation of the intrinsic reference into the software instructions associated with the image-capture device and/or any hardware with which the device is associated. In separate aspects, the intrinsic reference comprises one or more of: i) dimensions generated from at least two focal lengths associated with the image-capture device; ii) a library of standard object sizes incorporated in software provided to the image-capture device; iii) user identification of a reference object in a scene that contains the at least one object of interest; and iv) data from which measurements of the at least one object of interest can be derived, wherein such measurement data is generated from a combination of inertial sensors associated with the image-capture device, where the sensors provide data comprising: (a) an acceleration value from an accelerometer associated with the image-capture device; and (b) an orientation value provided by a gyroscopic sensor present in the image-capture device.
[0082] With regard to an intrinsic reference derived from the focal length of the image-capture device, most existing image-capture devices (e.g., cameras) comprise a short depth of field, resulting in images which appear focused only on a small 3D slice of the scene. Such features can be utilized in the present invention to allow estimation of the depth or 3D surface of an object of interest from a set of two or more images incorporating that object. These images can be obtained from substantially the same point of view while the image-capture device parameters (e.g., the focal length) are modified. Using this technique, the amount of blur in captured images can be used to provide an estimation of the object depth, where such depth can be used to derive measurements of one or more dimensions of interest of the object.
[0083] In a further embodiment of the intrinsic reference feature of the present invention, a library of standard object identities and sizes can be included in the software associated with the image-capture device to provide data from which measurement data for the at least one object of interest can be derived. For example, the size of one or more objects can serve as a reference when that object appears in the same scene as the at least one object of interest. For example, if a single toggle light switchplate, which has a standard US size of 4.5 inches (11.43 cm) in height and 2.75 inches (6.985 cm) in width, appears in a scene with an object of interest, the known standard dimensions of this switchplate can be used as an intrinsic reference to provide a point of reference from which the dimensions of the object of interest can be derived. In some aspects, the user can identify the intrinsic reference object manually, or object recognition methodologies can be used to automatically process the dimension data. The reference object used as the intrinsic reference can be generated from a database of digital photographic and/or video images of objects that are likely to occur in a given environment, for example. In another aspect, a database of common objects present in a construction or contractor setting can be included in software configurations directed toward such users. Items related to household furnishings can be included in software configurations directed toward interior decorators. More broadly, the database may include photographic and/or video images of structures within some general use or location.
[0084] In a third aspect, the intrinsic reference can be provided by user identification of an object in the scene that can serve as a reference. In this regard, the software associated with the image-capture device and/or the hardware into which the image-capture device is integrated can be configured to allow the user to select an object in the scene to serve as a reference, such as by way of a user interface. The user can measure the reference object directly and input the measured value, or the user can select from a library of standard objects, as discussed previously, where such database is associated with the software of the present invention. For example, if the identified reference object that will serve as the intrinsic reference for providing measurement of an object of interest present in the scene is a switchplate cover, the system 101 will elicit and receive the specification of an object to be used for dimensional calibration, and the user will select the switchplate cover to serve as the intrinsic reference. The user can then measure the dimensions of the switchplate cover and input the dimensions into the appropriate fields in the user interface when that information is elicited. Calculations of the dimensions of the object of interest will then be provided using the methodology set out elsewhere herein. Alternatively, in some embodiments, in response to the system eliciting an object to be used for dimensional calibration, the user selects the switchplate cover as a reference object and the standard dimensions of a switchplate cover are obtained from a library of standard object sizes incorporated within the software associated with the image-capture device, thereby allowing the measurements of an object of interest to be obtained as set out elsewhere herein.
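A minimal sketch of this reference-object scaling (all pixel values are invented, and the computation assumes the reference and the object of interest lie at a comparable distance from the image-capture device):

    reference_height_cm = 11.43     # known switchplate height (4.5 in)
    reference_height_px = 62.0      # its measured height in the image (invented)
    object_height_px = 980.0        # object of interest, in pixels (invented)

    cm_per_px = reference_height_cm / reference_height_px
    # Valid when the reference and object sit at similar depth in the scene.
    print(f"object height ~ {object_height_px * cm_per_px:.1f} cm")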
[0085] In a further embodiment, the intrinsic reference can be provided by sensor data obtained from inertial sensors associated with the image-capture device. In some embodiments, calculating the image-capture device displacement between two images/frames allows resolution of scale ambiguity. In some embodiments, the image-capture device displacement is extracted from data that inertial sensors (e.g., accelerometer and gyroscope) in the image-capture device provide. In particular, a gyroscope measures orientation based on the principles of angular momentum. An accelerometer, on the other hand, measures gravitational and non-gravitational acceleration. In some embodiments, integration of inertial data generated by movement of the image-capture device over time provides data regarding displacement that, in turn, is utilized to generate measurements of one or more dimensions of the object of interest using known methods.
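A minimal sketch of the displacement computation, assuming gravity-compensated, orientation-corrected acceleration samples (the sampling rate and values are invented):

    import numpy as np

    dt = 0.01                                  # 100 Hz sampling rate (assumed)
    accel = np.tile([0.2, 0.0, 0.0], (30, 1))  # linear acceleration, m/s^2

    velocity = np.cumsum(accel * dt, axis=0)           # first integration
    displacement = np.sum(velocity * dt, axis=0)       # second integration
    print(f"camera displacement between frames: {displacement} m")
    # The resulting baseline length resolves the reconstruction's scale.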
[0086] In some embodiments, image-capture-device-specific data is obtained by system 101 to provide more accurate measurement of the at least one object of interest. To achieve such accuracy, the actual image-capture device specifications such as, for example, focal length, lens-distortion parameters and principal point are determined through a calibration process. In certain embodiments, a self-calibration function is performed without image-capture device details, which can occur when such details are not stored. In this regard, software associated with the image-capture device can suitably estimate the information needed to provide measurements of the at least one object of interest. In some embodiments, self-calibration of the camera is conducted using the epipolar geometry concept. The epipolar geometry between each image pair can provide an estimated value of the focal length. The collection of these estimations is used in a prediction model to predict an optimum focal-length value.
[0087] The methods, systems, devices, and software aspects of the invention can be carried out on a wide variety of devices that can generally be categorized by the term "image-capture device." As used herein, such image-capture devices in use today are integrated into mobile devices such as "smartphones," mobile telephones, "tablets," "wearable devices" (such as where a camera may be embedded or incorporated into clothing, eyeglasses or functional jewelry, etc.), laptop computers, unmanned aerial vehicles (UAVs; e.g., drones, robots), etc. Still further, the image-capture devices 182 and 184 (see Figure 1A) can be associated (such as by being in communication with) desktop computers 186 and cloud-based computers 181. It is
contemplated by the inventors herein that innovations in image-capture devices will be introduced in the future. Such image-capture devices are included in the present invention if these devices can be configured to incorporate the inventive methods herein.
[0088] In various aspects of the invention, all or some portion of the processes claimed herein can be carried out on a portable device that includes suitable processing capability. In recent years, there has been a proliferation of smartphones. Exemplary operating
systems/smartphones are iOS/iPhone®, Android®/Samsung Galaxy® and Windows®/Windows Phone®. As would be recognized, smartphones are wireless, compact, hand-held devices that, in addition to basic cellular telephone functions, include a range of compact hardware. Typical smartphones have embedded (or "native") digital cameras that include both video and static image-acquisition capabilities, large touchscreen displays, and broadband or Wi-Fi capabilities allowing for the receipt and transmission of large amounts of data to and from the Internet. More recently, tablet computers and wearable devices have emerged that provide, in pertinent part, many of the functionalities of smartphones, including image capture and processing capabilities and WiFi and cellular capabilities.
[0089] Smartphones, tablets and wearable devices not only include a range of hardware, they are also configured to download and run a wide variety of software applications, commonly called "apps." The proliferation of mobile devices, with their combination of portable hardware and readily loaded software applications, creates a platform upon which many aspects of the invention may be practiced.
[0090] In certain aspects, the invention advantageously utilizes basic features of
smartphones, tablets, and wearable devices, and extends the capabilities of these devices to include accurate and convenient measurement of one or more objects of interest by using the image-capture devices native on such devices. In further embodiments, the processes described herein may convert a common smartphone, tablet, wearable device, standalone camera or the like into a measurement tool, medical device or research tool, for example. Such aspects will benefit users by extending the functionality of these devices. [0091] While use of multi-function smartphones, tablets, wearable devices, or the like that incorporate image-capture devices suitably allows implementation of the methodology herein, devices that include less functionality, such as "standalone" digital cameras or video cameras, are also used in some embodiments. Such image-capture devices generally include Wi-Fi and/or cellular capabilities, as well as "apps," so as to provide networked functionality. Accordingly, such image-capture devices can suitably be utilized in accordance with one or more of the inventions herein. One example of a standalone digital camera that can be used is the GoPro® H3.
[0092] In a further example, the methods herein can be performed on a single-purpose device. For example, an image-capture device intended for use by professionals who work with exterior and interior building spaces (e.g., architects, contractors, interior designers, etc.) can be configured with hardware and software suitable to allow the users to obtain measurements that they can use in their respective professional responsibilities. One example of such
implementations is detailed in the co-assigned US Provisional Patent Application No.
62/165,995, filed May 24, 2015, entitled "Interior Survey Devices and Methods," the disclosure of which is incorporated by reference in its entirety.
[0093] The methods herein can also be provided in the form of an application-specific integrated circuit ("ASIC") that is customized for the particular uses set out herein. Such an ASIC can be integrated into suitable hardware according to known methods to provide a device configured to operate the methods herein.
[0094] Still further, the present invention relates to mobile devices and the like that are configurable to provide substantially accurate measurements of at least one object of interest, where such measurements are derived from a 3D digital representation of the object of interest obtained according to the methodology herein. In one aspect, for example, the dimensions of a roof can be obtained using a single video camera that includes passive image-capture capability, such as that embedded in a mobile device, thereby eliminating the need to send a person to the location to measure the size of the roof to provide an estimate. Yet further, the dimensions of a kitchen (or, more broadly, any room or interior of a structure) can be obtained using the passive image-acquisition and processing methods herein, thereby allowing cabinets or the like to be sized accurately without the need to send an estimator to the customer's home. Yet further, accurate dimensions of a floor area can be provided using measurements derived from wall-to-wall distances in a room so as to provide an estimate of the amount of materials needed for a flooring project. As would be recognized, the ability to obtain accurate measurements of locations such as roofs, kitchens, flooring and other locations would provide significant benefits to contractors, who currently must first visit a location to obtain substantially accurate measurements before being able to provide a close estimate of the cost of a construction job. Such applications are described in the co-assigned US Provisional Application No. 62/165,995, previously incorporated herein.
[0095] In further embodiments, the devices and methods herein are used to provide substantially accurate measurements and characteristics of a person's body so as to allow custom clothing to be prepared for him or her without the need to visit a tailor. In some embodiments, such accurate body measurements are used to facilitate telemedicine applications.
[0096] Yet further, in some embodiments, the invention herein provides accurate measurement of wound size and other characteristics present on a human or an animal.
Accordingly, the present invention further relates to medical devices configured with image-capture devices and associated software that provide the disclosed benefits and features.
[0097] In further embodiments, the accurate 3D digital representations of the object(s) can be used to create accurate 3D models of the object of interest, where such 3D models can be generated using 3D printing devices, etc.
[0098] In some embodiments, the methodology herein is utilized in conjunction with navigation for robots, unmanned autonomous vehicles and the like, where such navigation utilizes image-capture devices therein. In one example, the present invention can be incorporated with Simultaneous Localization And Mapping ("SLAM"). As would be recognized, SLAM is a method used in robotic navigation in which a robot or autonomous vehicle estimates its location relative to its environment while simultaneously avoiding dangerous obstacles. The autonomous vehicle makes observations of surrounding landmarks from poses obtained from one or more image-capture devices associated with the vehicle, and probabilistic methods are used to achieve maximum-likelihood estimation of the camera trajectory and 3D structure. Although many research efforts have been undertaken on this topic in the robotics and computer-vision communities, at this time no conventional methodology can suitably provide a substantially accurate dense 3D mapping of large-scale environments because the focus of existing methodologies is directed primarily towards accurate estimation of the camera trajectory.
[0099] In a further example, the methods herein can be performed on a single-purpose device. For example, an image-capture device intended for use by professionals who work with interior and exterior areas and building spaces (e.g., architects, contractors, interior designers, engineers, landscapers, etc.) can be configured with hardware and software suitable to allow the users to obtain information such as measurements that they can use in their respective professional responsibilities. In one regard, a device configured specifically to generate surveys of interior and exterior scenes using the inventive methods herein comprises an inventive survey device.
[00100] In further embodiments, the present invention relates to devices and methods for generating surveys of interior and exterior scenes or objects in the scenes using image capture devices associated with image processing techniques suitable to allow survey information to be returned quickly to a user and where such survey information can optionally be further processed. Such surveys and their included survey information relating to interior and exterior scenes can be used in applications such as construction/remodeling estimation, 3D model generation, insurance policy underwriting and adjusting, interior and exterior design efforts, real estate marketing, inventory management and other areas where it can be desirable to obtain information about features and dimensions of one or more features or objects present in the scene.
[00101] In one embodiment, the surveying devices and methods can capture information such as measurements, features, dimensions, quantity, etc. relating to interior and exterior scenes or objects in the scene while a user is on-site, and such information can be returned quickly to the user for use. In use, an image-capture device, such as those mentioned previously, can be used to generate images of one or more areas of interest.
[00102] The images used to generate interior survey information according to the present invention can be processed using microprocessor capability native to the image-capture device, or the images and associated data can be transmitted to a remote server (e.g., to the cloud) for processing outside of the device. If the images are processed outside of the image-capture device, such as on a remote server, the interior location survey information generated from the images can be returned to the user (e.g., provided on a smartphone or tablet, or made available for use on a PC, etc.). For example, the survey information can be returned for use in one or more apps associated with the user's device. In one example, an app can use the survey information obtained from the processed images to provide takeoff information to a user. Alternatively, the survey information can be utilized for a variety of uses as discussed elsewhere herein. [00103] The survey information derived from images obtained of the scenes or locations of interest can be used, for example, to generate floorplans, takeoff information and interior design information, and to provide information to insurance companies, for real estate marketing, 3D model generation, inventory management and the like. In one or more of such aspects, the present invention allows one to obtain information regarding one or more of measurements, location, direction, fixtures (e.g., appliances, furniture, built-in cabinets, etc.), floor, wall and ceiling dimensions, the presence or absence of doorways and windows, electrical and plumbing locations, property dimensions (e.g., television size, etc.), as well as other information that can be derived from a survey of an interior scene or location. Aspects of commercial and residential interior scenes that can be suitably surveyed with the devices and methods of the present invention include, as illustrative examples, internal walls that are straight or curved, stairways, doors, windows, cutouts, holes, island areas, borders, insets, flooring and ceiling dimensions, etc.
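As a sketch of the remote-processing path described above, captured frames might be posted to a server and the resulting survey information returned to the user's device. The endpoint URL, payload layout and response fields below are hypothetical, not a published interface:

    import requests

    def request_survey(image_paths, endpoint="https://example.com/api/survey"):
        # Upload captured frames; the server runs the reconstruction and
        # returns survey information (e.g., floorplan edges and dimensions).
        files = [("images", open(path, "rb")) for path in image_paths]
        try:
            response = requests.post(endpoint, files=files, timeout=300)
            response.raise_for_status()
            return response.json()
        finally:
            for _, handle in files:
                handle.close()

    survey = request_survey(["frame_000.jpg", "frame_001.jpg"])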
[00104] The surveying devices and methods of the present invention can be utilized to obtain one or more floorplans associated with an interior location of interest. As used herein, a "floorplan" is a drawing to scale of a location showing a view from above of the relationships between rooms, spaces and other physical features at one level of a structure. Dimensions can be drawn between the walls to specify room sizes and wall lengths. Floorplans may also include details of fixtures like sinks, water heaters, furnaces, manufacturing equipment, etc. In this regard, apps or other software associated with the present invention can be configured to automatically import measurements and dimensions onto a floorplan generated herein.
Floorplans can also include notes for construction to specify finishes, construction methods, or symbols for electrical items. The drawings obtainable from the devices and methods herein are equally suitable for printing to provide, for example, blueprints, or they can be made visible on a device screen for use in a non-paper environment.
[00105] Dimensional and other information obtained from the surveying devices and methods herein can be utilized in a variety of software-based applications. For example, the data can be generated in CSV (comma-separated values) form for utilization in construction estimation programs operating from a spreadsheet environment. Still further, the information can be utilized to generate AutoCAD® files that can be utilized to create, for example, 3D models of an interior location, where such models may be used to generate architectural, construction, engineering, and other documentation related to a construction project. Even further, the interior survey information can be provided for use in the well-known DWG, DXF or STL file formats. [00106] In one embodiment, the information generated from the surveys of the present invention can be used to generate takeoff information, where such information can be further used to generate materials lists for use in construction, remodeling or the like. The devices and methods of the present invention have wide application to a number of construction- and remodeling-related activities where accurate data relating to one or more dimensions of a location is needed. In this regard, and in separate embodiments, the present invention provides devices and methods for generating takeoffs applicable to construction, remodeling and interior or exterior design. Yet further, the present invention provides benefits in construction project management. To this end, the devices and methods of the present invention facilitate inventory management of construction elements and can further enable a contractor to rapidly perform engineering cost analyses while a project is underway, thus better allowing the effects of revisions and change orders on scheduling and project cost to be assessed.
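A minimal sketch of the CSV export mentioned above, using only the Python standard library; the line identifiers and lengths are illustrative (they echo Table 4 below), and the column names are assumptions:

    import csv

    # (line ID, scaled length in inches) pairs produced by a survey.
    dimensions = [("A", 255.51), ("B", 135.36), ("C", 117.194)]

    with open("survey_dimensions.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["line_id", "length_in"])   # header row
        writer.writerows(dimensions)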
[00107] A common requirement in construction projects is the generation of an estimate of the cost of a product by utilizing drawings obtained from measurement of a location. As would be recognized, in order to submit a bid for the construction project, a manufacturer's
representative or estimator should accurately estimate the cost of the manufacturer's components required for the construction project. This estimate can then be used as part of the bidding process and/or as part of the materials-pricing process. A "takeoff" is an estimation of the quantities needed to undertake and complete a project based on the drawings and specifications. Quantity takeoff is generally the first part of an estimating process. The remainder of the estimating process includes determining material selection and cost. Quantities may include numerical counts, such as the number of doors and windows in a project, but may also include other quantities such as the volume of concrete or the lineal feet of wall space. In order to estimate the cost of the manufacturer's components, the manufacturer's representative must "takeoff" all of the manufacturer's components from the paper drawing of the construction project. As would be recognized, takeoffs can be the most time-consuming part of a construction or remodeling project because multiple measurements must first be made of the relevant scenes or objects. Moreover, no payment is generally provided for preparing takeoffs because they are part of the bidding process. Errors in creating takeoffs using existing methods also generally mandate that the amounts of components obtained from analysis of interior dimensions be increased by at least 10%, and sometimes by as much as 25%. Because many components are not returnable for credit at the completion of a job, such extra materials cause construction and remodeling jobs to be more expensive and create construction waste. It has been found that the time and accuracy of takeoffs can be greatly enhanced using the surveying devices and methods of the present invention.
[00108] In one example, the surveys generated by the present invention can provide survey information for all or part of a flooring area in an interior location in which flooring materials are to be installed. That is, the surveys of the present invention can be used to provide information that can be used to generate flooring takeoffs. As used herein, "flooring materials" are broadly defined to include carpet, carpet tile, ceramic tile, laminate flooring and similar materials. To illustrate use of one aspect of the present inventions, processing to provide flooring-material takeoff information generally comprises the following steps that are needed in order for a bid to be provided: the manufacturer's representative or estimator, using the relevant survey information generated from the inventive interior surveying devices and methods herein, selects for calculation one or more flooring components, calculates the number of components required, and calculates the cost of the components. If the contract for the flooring is awarded, a parts list can also be generated for ordering and inventory management of the needed components. The present invention provides improvements in devices and methods to allow such flooring takeoffs to be obtained more quickly and easily and, in some aspects, the survey information obtained herein can provide more accurate information, thus leading to more accurate takeoff information obtainable therefrom. In particular, measurement and other pertinent dimensional information can be derived from the survey information generated according to the invention herein, thus providing improvements in flooring takeoff generation.
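As one hypothetical instance of such a flooring takeoff, the sketch below computes the floor area from the surveyed boundary polygon with the shoelace formula and derives a tile count with a waste allowance; the room coordinates, tile size, waste factor and unit price are all illustrative assumptions:

    def polygon_area(vertices):
        # Shoelace formula; vertices are ordered (x, y) pairs in inches.
        area = 0.0
        n = len(vertices)
        for i in range(n):
            x1, y1 = vertices[i]
            x2, y2 = vertices[(i + 1) % n]
            area += x1 * y2 - x2 * y1
        return abs(area) / 2.0

    floor = [(0, 0), (255.51, 0), (255.51, 135.36), (0, 135.36)]
    area_sqft = polygon_area(floor) / 144.0          # in^2 -> ft^2

    tile_sqft = (24 * 24) / 144.0                    # one 24" x 24" tile
    waste = 0.10                                     # 10% overage allowance
    tiles_needed = -(-(area_sqft * (1 + waste)) // tile_sqft)  # ceiling
    material_cost = tiles_needed * 4.50              # illustrative unit price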
[00109] In addition to providing takeoff information regarding flooring materials, the inventive devices and methods can also be used to provide accurate information regarding one or more of carpet seam layout and manipulation, cut waste optimization, roll cut sheet
manipulation, pattern carpet matching and tile pattern layout. For example, when used to match carpet patterns, a carpet section comprising a pattern can be overlaid onto a floorplan to allow a designer or installer to generate the optimum placement of carpet sections. Such functionality can not only improve the aesthetic appearance of a final carpet installation, but waste from excess remnant generation can also be reduced.
[00110] As a further example of uses for the surveys of the present invention, virtual tours can be obtained from the survey information generated from image processing as described elsewhere herein. Virtual tours can provide a way for a user to view different parts of a physical location or site from different perspectives and in different directions. As would be recognized, such virtual tours can be useful in many different contexts where it may be advantageous for a user to remotely see a physical site from different angles and points of view. Examples of physical sites that can be presented in a virtual tour include: a house or other property for sale; a hotel room or cruise ship stateroom; a museum, art gallery, or other sightseeing destination; or a factory or facility plant for training purposes or the like.
[00111] When combined with the ability to convey accurate dimensions of the location, the value of a virtual tour can be greatly improved. For example, when applied in a real estate marketing context, the survey information provided by the survey devices and methods of the present invention can allow a potential buyer or renter to see the actual dimensions of a room to determine whether her furniture or other fixtures will fit. In further embodiments, the ability to obtain accurate dimensions of scenes from the surveying devices and methods of the present invention can allow a user to overlay pictures of furniture etc. that she wishes to buy onto a floorplan or even an actual image of the room to make sure the furniture will fit prior to making a purchase.
[00112] Yet further, the surveys of the present invention can be utilized to generate information useful for insurance underwriting and/or for claims adjustment. For example, when a destructive event occurs, such as a fire, it can be difficult for an insurance adjuster to validate the insured's representations of the condition and value of the interior and/or exterior of a building prior to the event. In this regard, an insurance company can obtain a floorplan, virtual tour or the like of an insured's house or facility as a requirement for underwriting a policy or by providing a discount to an existing policy holder. An insurance company may also be interested in obtaining takeoff information and, as mentioned previously, such information is obtainable from the devices and methods herein. Because the survey functionality of the present invention allows substantially accurate dimensions of interior and exterior locations to be acquired, along with those of any fixtures incorporated therewith, an insurance company that obtains such a survey prior to an occurrence of a destructive event that results in a claim can better ensure that the information provided by the insured accurately matches the conditions of the location existing prior to the destructive event.
[00113] Three-dimensional (3D) models of interior and exterior locations can further be derived from the survey information obtained using the presently described inventions. Besides applicability to virtual tours as described elsewhere herein, such 3D models can be utilized to provide users with an immersive experience regarding a remote location. For example, a 3D model of an airplane interior can allow a potential traveler to understand how much legroom he will have on a flight. Such 3D models can also allow a user to remotely travel to a store to generate an improved online shopping experience. Yet further, 3D models obtained from the interior and exterior surveys of the present invention can be used to provide immersive learning experiences for remote training or the like.
[00114] In a further embodiment, the survey devices of the present invention can be used to provide inventories of fixtures, equipment or stock present at a location. For example, one or more images can be taken of a location, from which the number of items present can be derived using the survey information obtained from the images. When such images are obtained from image-capture devices present in a warehouse or other type of facility, real-time inventory management information can be obtained, and the present invention thus also has utility in security applications and the like.
[00115] Yet further, the survey devices and associated information derived from images generated and processed according to the present invention can be compared to information obtained from a library of image information stored or otherwise obtainable by a user. In another embodiment, a database of common objects present in a construction or contractor setting can be included in apps or other software implementations directed toward such users. When information obtained from items present in the standard library is used along with information obtained from the images generated from the survey devices and methods of the present invention, the presence or absence of such standard items can be determined. For example, if a survey provides information that an object with a size of 4.5 inches (11.43 cm) in height and 2.75 inches (6.985 cm) in width is present in a scene, associated software can return information to the user that the object in the scene is, in high likelihood, a standard US toggle switchplate. Such information can allow a user easily to obtain information regarding the number and location of light switches in a location from images. Still further, the library of image data associated with the survey inventions herein can be included in software
configurations directed toward interior decorators, facilities designers, project managers or the like.
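A sketch of such a library lookup, under stated assumptions (the library entries, tolerance and function name are illustrative): surveyed dimensions are compared against known standard-object sizes, and near matches are reported.

    STANDARD_OBJECTS = {
        "US toggle switchplate": (4.5, 2.75),       # height, width in inches
        "standard US interior door": (80.0, 32.0),  # illustrative entry
    }

    def identify(height_in, width_in, tolerance=0.1):
        # Return the names of library objects whose dimensions match the
        # surveyed dimensions within the given tolerance (inches).
        return [name for name, (h, w) in STANDARD_OBJECTS.items()
                if abs(h - height_in) <= tolerance
                and abs(w - width_in) <= tolerance]

    print(identify(4.5, 2.75))   # -> ['US toggle switchplate']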
[00116] The survey devices and associated information of the present invention can further be used to generate information regarding interior or exterior locations relating to the presence or absence of required fixtures or equipment. For example, an interior survey can be conducted with the devices and methods of the present invention to determine whether a required piece of equipment is present in a location. In an example of this, many locations require the presence of defibrillators or other safety equipment in a prescribed number and in certain locations. The devices and methods of the present invention can be used to obtain surveys of such locations. Because the required equipment will have a known size and required orientation in a room, survey information obtained from the devices and methods of the present invention can be used to determine whether the required equipment, a defibrillator in this example, is present.
[00117] Yet further, the survey devices and associated information can be used to determine whether a facility or other location complies with the Americans with Disabilities Act or other types of government regulations where locations are required to have fixtures or construction elements having a certain configuration. In this regard, survey information obtained according to the present invention can allow determination of whether doorways are suitably wide, ramps are present, etc.
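For instance, the 2010 ADA Standards specify a minimum clear width of 32 inches for door openings, so one such compliance check over surveyed door widths could be as simple as the following sketch (the door data and variable names are placeholders; other regulatory thresholds would be added analogously):

    ADA_MIN_DOOR_CLEAR_WIDTH_IN = 32.0

    # Clear widths, in inches, taken from the survey information.
    doors = {"entry": 35.8, "office": 31.2}

    noncompliant = {name: width for name, width in doors.items()
                    if width < ADA_MIN_DOOR_CLEAR_WIDTH_IN}
    print(noncompliant)   # -> {'office': 31.2}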
[00118] The survey devices and associated information can also be used to collect ground-level survey information. The types of outdoor survey information obtainable using the inventive methodology are varied. For example, the inventive devices and methodology can be used to generate one or more of construction surveys, as-built surveys, exterior fixture and equipment inventories, landscaping plans and the like.
[00119] The surveys and associated information can also have utility for forensic science applications. For example, the devices and methods herein can be used for documenting a crime scene and can provide a capability to make subsequent measurements using captured floorplan and image data for use as evidence. The ability of an investigator to obtain accurate
measurements from the actual images taken in a crime scene can greatly enhance the evidentiary quality of information obtained from the crime scene.
[00120] The surveys and associated information can also be used in any application in which surveys that capture accurate measurements of objects, fixtures, construction features, etc. of interior or exterior scenes, or of objects in a scene, are desired, where such accurate measurements are generated from images derived from image-capture devices as described elsewhere herein.
[00121] Referring to Figure 3, and with regard to use of the methodology herein with robotic navigation and the like, a video stream 300 is provided to method 301. These images can be derived from a video image-capture device (such as camera 184 of Figure 1A) as discussed elsewhere herein. In some embodiments, line segments and corner points are detected and tracked in block 305 and block 310, respectively, of method 301. The output of the detection and matching process of block 305 and block 310 includes corresponding line tracks 315 and corresponding point tracks 320. In block 325, SLAM is conducted as set out in more detail in the description of Figure 4. In some embodiments, an initial estimation of the structure and motion data resulting from block 325 is determined based on the recovered structure data in block 330. In some embodiments, hybrid bundle-adjustment techniques 335 are used to further refine/optimize the 3D structure and motion data 340, which are used, in some embodiments, in process 345 to generate a 3D point, line and/or edge cloud 350 representative of the at least one object of interest. In some embodiments, the 3D structure and motion data 340 are used in a 3D plane-detection process 355 to detect 3D planes 360. In some embodiments, the 3D point, line and/or edge cloud 350 and 3D planes 360 are included in intelligent data smoothing in 365 to generate a 3D digital representation 370 incorporating the at least one object of interest.
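A minimal sketch of the corner-detection and tracking front end (blocks 305 and 310) using OpenCV is shown below; the video file name and detector parameters are assumptions, and line-segment tracks would be produced analogously with a line-segment detector.

    import cv2

    cap = cv2.VideoCapture("scene.mp4")
    ok, frame = cap.read()
    prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Block 310: detect corner points in the first frame.
    points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                     qualityLevel=0.01, minDistance=8)
    tracks = [[p.ravel()] for p in points]       # one point track per corner

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Track corners into the new frame with pyramidal Lucas-Kanade flow.
        new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray,
                                                      points, None)
        for track, pt, good in zip(tracks, new_pts, status.ravel()):
            if good:
                track.append(pt.ravel())         # extend the point track
        points, prev_gray = new_pts, gray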
[00122] Referring to Figure 4, in some embodiments, SLAM 325 of Figure 3 is implemented using method 401. In some embodiments, a proper image-capture device (e.g., camera) motion model 400 is initially identified. In some embodiments, such selection is as simple as a model that represents constant (or substantially constant) directional and angular velocity; in other embodiments, it is more complex. Once the motion model is identified in block 400, video frames 405 are read one by one. For each new video frame 405, an initial estimation of the camera pose is calculated according to predictions from the camera motion model selected in block 400. Previously detected features are tracked according to visibility constraints, and new features are detected if necessary. In block 410, each new feature is parameterized using inverse depth. The feature-tracking information is combined with the predicted motion in block 415 to allow determination of future feature locations in block 420. Once these locations are determined, in block 425 the predicted camera pose and 3D structure are refined based on the new observations. These observations are also used to update the camera motion model in block 430. A parallax is calculated for each feature in block 435 according to the updated parameters, and if a suitable parallax is observed, a Euclidean representation is used to replace the inverse-depth parameterization. A semi-global optimization is then applied based on the visibility information to find the maximum-likelihood estimation of the camera poses and 3D structure in block 440. This process is repeated until all video frames are determined by block 445 to have been processed, and method 401 then provides the initial estimation of structure and motion of block 330 (referring again to Figure 3).
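The following sketch illustrates one simple realization of the constant-velocity motion model of block 400 and the pose prediction and update of blocks 415 through 430; poses are (position, rotation-vector) pairs, and all structures and parameter values are illustrative. Adding rotation vectors is only a small-rotation approximation, which suits the frame-to-frame setting.

    import numpy as np

    class ConstantVelocityModel:
        def __init__(self):
            self.linear_velocity = np.zeros(3)    # per-frame translation
            self.angular_velocity = np.zeros(3)   # per-frame rotation vector

        def predict(self, position, rotvec):
            # Initial camera-pose estimate for the next frame (block 420).
            return (position + self.linear_velocity,
                    rotvec + self.angular_velocity)

        def update(self, prev_pose, refined_pose, blend=0.5):
            # Block 430: refresh the velocities from the refined pose,
            # blending with the old values to damp measurement noise.
            (p0, r0), (p1, r1) = prev_pose, refined_pose
            self.linear_velocity = ((1 - blend) * self.linear_velocity
                                    + blend * (p1 - p0))
            self.angular_velocity = ((1 - blend) * self.angular_velocity
                                     + blend * (r1 - r0))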
[00123] In conjunction with the methods herein, in some embodiments, the software associated with the image-capture device and/or the hardware into which the image-capture device is integrated is configured to provide the user with interactive feedback with regard to the image-acquisition parameters. For example, in some embodiments, such interactive feedback provides information regarding the object of interest, including whether the tracking is suitable to obtain the plurality of overlapping 2D images necessary to provide suitable images from which 3D digital representations of the object(s) of interest can be generated to provide substantially accurate measurements or other useful information relating to the object. In some embodiments, such processing is conducted in the image-capture device itself (e.g., device 182 or device 184 of Figure 1A) or the hardware in which the device is integrated (e.g., smartphone, wearable device, etc.). In other embodiments, the processing is performed "in the cloud" on a server 181 that is in communication with the image-capture device/hardware. In other embodiments, the processing is performed on any device (e.g., device 186 of Figure 1A) in communication with the image-capture device and/or hardware. In some embodiments, such processing is performed on both the device/hardware and an associated server, where decision-making regarding the location of various parts of the processing may depend on the speed and quality with which the user needs results. Yet further, in some embodiments, user feedback is provided in real time, in near real time or on a delayed basis.
[00124] Yet further, in some embodiments, the user display of the 3D digital representation is configured to receive user-generated inputs to facilitate generation of the plurality of overlapping 2D images of the at least one object of interest, the 3D digital representations of the object(s) of interest and/or the extracted measurement values. In some embodiments, such user-generated inputs include, for example, the level of detail, a close-up of a portion of the point cloud/image, optional colorization, a desired level of dimension detail, etc.
[00125] In a further embodiment, the software associated with the image-capture devices and methods herein is configured to provide an accuracy value for the 3D digital representations of the object(s). By reporting a level of accuracy (where such accuracy is derivable as set out elsewhere herein), a user will obtain knowledge about the accuracy of the extracted measurement or other dimensional value of the at least one object of interest.
[00126] In some embodiments, the software associated with the image-capture devices and/or hardware in which the image-capture device is integrated is configured to elicit and receive from the user a selection of a region/area of interest in a captured image(s) of the object of interest. For example, in some embodiments, when a scene containing an object of interest is captured, the software elicits and receives selection of a specific object appearing in the scene. In an exemplary configuration of such an implementation, the scene presented to the user through a viewfinder or screen on the image-capture device elicits and receives the selection of an object present in the scene, such as by touch or another type of method. The object of interest can be identified or selected by a computer or a user. In some embodiments, the identified object is then analyzed in accordance with the methods herein so as to provide an accurate 3D digital representation of the object(s).
[00127] In some embodiments, the methods of the present invention are suitable for use, and are performed, "in the cloud" (i.e., the software executes on server computers connected to the internet and leased on an as-needed basis). (Note that the word "cloud" as used in the term "point cloud" described as part of the invention is independent of, and unrelated to, "cloud computing" as such.) As would be recognized, cloud computing has emerged as one optimization of traditional data-processing methodologies. A computing cloud is defined as a set of resources (e.g., processing, storage, or other resources) available through a network that can serve at least some traditional datacenter functions for an enterprise. A computing cloud often involves a layer of abstraction such that the applications and users of the computing cloud may not know the specific hardware that the applications are running on, where the hardware is located, and so forth. This allows the computing cloud operator some additional freedom in terms of implementing resources into and out of service, maintenance, and so on. Computing clouds may include public computing clouds, such as Microsoft® Azure, Amazon® Web Services, and others, as well as private computing clouds.
[00128] Communication media appropriate for use in or with the inventions of the present invention may be exemplified by computer-readable instructions, data structures, program modules, or other data stored on non-transient computer-readable media, and may include any information-delivery media. The instructions and data structures stored on the non-transient computer-readable media may be transmitted as a modulated data signal to the computer or server on which the computer-implemented methods of the present invention are executed. A "modulated data signal" may be a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media may include wired media such as a wired network or direct- wired connection, and wireless media such as acoustic, radio frequency (RF), microwave, infrared (IR) and other wireless media. The term "computer-readable media" as used herein may include both local non-transient storage media and remote non-transient storage media connected to the information processors using communication media such as the internet. Non- transient computer-readable media do not include mere signals or modulated carrier waves, but include the storage media that form the source for such signals.
[00129] In some embodiments, the present invention provides a method for generating a 3D digital representation of an object of interest. This method includes: a) receiving a plurality of 2D digital images of a scene, wherein: i) the scene includes at least one object of interest, wherein the object of interest has a plurality of dimensions; ii) at least a portion of the plurality of the 2D digital images of the scene are overlapping with regard to the at least one object of interest; and iii) the plurality of 2D digital images are generated from a single passive image-capture device; and b) processing at least a portion of the plurality of overlapping 2D digital images that includes the at least one object of interest using a 3D reconstruction process that incorporates a structure-from-motion algorithm, thereby generating a 3D digital representation of the at least one object of interest, wherein measurements of one or more of the plurality of dimensions of the at least one object of interest are obtainable from the 3D digital representation.
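A minimal two-view sketch of the processing step above, using OpenCV and assuming known intrinsics K and image file names; a full pipeline would add more views, feature tracks and bundle adjustment, and the reconstruction remains up to scale absent an intrinsic or extrinsic reference.

    import cv2
    import numpy as np

    img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)
    K = np.array([[1500.0, 0.0, 960.0],          # assumed intrinsics
                  [0.0, 1500.0, 540.0],
                  [0.0, 0.0, 1.0]])

    # Detect and match features across the two overlapping images.
    orb = cv2.ORB_create(4000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Recover the relative camera motion (two-view structure from motion).
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    # Triangulate inlier correspondences into a 3D point cloud (up to scale).
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    inliers = mask.ravel() > 0
    pts4d = cv2.triangulatePoints(P1, P2, pts1[inliers].T, pts2[inliers].T)
    cloud = (pts4d[:3] / pts4d[3]).T             # N x 3 point coordinates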
[00130] In some embodiments, the present invention provides a method for generating a 3D digital representation of an object of interest. This method includes: receiving a plurality of 2D digital images of a scene, wherein: i) the scene includes at least one object of interest, wherein the object of interest has a plurality of dimensions; ii) at least a portion of the plurality of the 2D digital images of the scene are overlapping with regard to the at least one object of interest; and iii) the plurality of 2D digital images are generated from a single passive image-capture device. The method also includes processing at least a portion of the plurality of overlapping 2D digital images that includes the at least one object of interest using a 3D reconstruction process that incorporates a structure-from-motion algorithm, thereby generating a 3D digital representation of the at least one object of interest; and calculating measurements of one or more of the plurality of dimensions of the at least one object of interest from the 3D digital representation. Some embodiments further include displaying the 3D digital representation of the at least one object of interest. Some embodiments further include calculating the plurality of dimension measurements of the at least one object of interest from the 3D digital representation. In some embodiments, the single passive image-capture device is a video camera.
[00131] Some embodiments further include generating at least one of a 3D model, a 3D point cloud, a 3D line cloud, and a 3D edge cloud from the 3D digital representation, wherein each, independently, includes at least one of the plurality of dimensions of the at least one object of interest. In some embodiments, the measurements are obtainable substantially without a separate scaling step.
[00132] Some embodiments further include selecting one or more of the plurality of dimensions in the at least one object of interest, wherein each of the selected dimensions, independently, includes an actual measurement value; extracting measurement data from the selected dimensions; and processing the extracted measurement data to provide an extracted measurement value for each selected dimension.
[00133] In some embodiments, at least one of the selection steps is automatically performed by a computer. In some embodiments, either or both of the selection steps is elicited and received by a computer from a user. In some such embodiments, a pixel accuracy of each extracted measurement value, independently, is represented in pixel units according to the following formula:

((distance of object of interest from image-capture device) * (image-capture device sensor size)) / ((image-capture device resolution) * (image-capture device focal length))

wherein "*" represents multiplication and "/" represents division.
[00134] In some such embodiments, the pixel accuracy of each extracted measurement value is about one pixel.
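A worked instance of the formula above, with illustrative camera parameters (an object 3 m away, a 4.8 mm sensor width, 1920-pixel horizontal resolution and a 4.25 mm focal length); the values are assumptions chosen only to show the arithmetic:

    distance_mm = 3000.0      # object distance from the image-capture device
    sensor_mm = 4.8           # image-capture device sensor width
    resolution_px = 1920.0    # horizontal image resolution
    focal_mm = 4.25           # image-capture device focal length

    pixel_extent_mm = (distance_mm * sensor_mm) / (resolution_px * focal_mm)
    print(pixel_extent_mm)    # ~1.76 mm of the scene covered by one pixel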
[00135] In some embodiments, each extracted measurement value of each selected dimension is, independently, within about 5% of each corresponding actual measurement value.
[00136] Some embodiments further include generating boundary information for the at least one object of interest.
[00137] In some embodiments, the present invention provides a computerized method of obtaining at least one measurement of an object of interest. This computerized method includes: a) receiving a plurality of 2D images of a scene from a single passive image-capture device, wherein the plurality of 2D images includes image data of at least one object of interest present in the scene, and at least a portion of the plurality of 2D images of the scene are at least partially overlapping with regard to the at least one object of interest, thereby providing a plurality of overlapping 2D images that includes the at least one object of interest; b) generating, by the computer, a 3D representation of the at least one object of interest, wherein the 3D digital representation is obtained from at least a portion of the 2D digital images incorporating the object using a process incorporating a structure-from-motion algorithm; c) eliciting and receiving selections, made by either or both the computer or the user, of one or more dimensions of interest in the at least one object of interest, wherein each dimension, independently, comprises an actual measurement value; d) extracting data, by the computer, from the 3D digital representation, wherein the extracted data comprises measurement data comprising information corresponding to each identified dimension; and e) processing, by the computer, the extracted measurement data to provide an extracted measurement value for each selected dimension.
[00138] In some embodiments, an accuracy of each extracted measurement value, independently, is represented in pixels according to the formula: ((distance of object of interest from image-capture device) * (image-capture device sensor size)) / ((image-capture device resolution) * (image-capture device focal length)).
[00139] In some embodiments, a pixel accuracy of each extracted measurement value is about one pixel.
[00140] In some embodiments, the plurality of 2D images includes video images.
[00141] Some embodiments further include generating boundary information for the at least one object of interest.
[00142] In some embodiments, the present invention provides a computerized method of boundary detection. In some embodiments, this method includes: a) receiving a plurality of 2D digital images of a scene, wherein: i) the scene includes at least one object of interest having a plurality of boundaries; ii) at least a portion of the plurality of 2D digital images is overlapping with regard to the at least one object of interest; and iii) the plurality of 2D digital images are generated from a single passive image-capture device; and b) processing at least a portion of the plurality of overlapping 2D digital images that include the at least one object of interest using a method that incorporates a structure-from-motion algorithm, thereby providing detected boundary information for at least a portion of the at least one object of interest, wherein the detected boundary information can be represented as at least one of: a 3D digital representation, a 3D model, a 3D point cloud, a 3D line cloud, and a 3D edge cloud, each corresponding to at least a portion of the at least one object of interest. In some such embodiments, the single passive image-capture device is a video camera. In some embodiments, measurements of at least a portion of the at least one object of interest are obtainable from the detected boundary information.
[00143] At this time, there is little distinction left between hardware and software implementations of aspects of systems; the use of hardware or software is generally (but not always, in that in certain contexts the choice between hardware and software can become significant) a design choice representing cost vs. efficiency tradeoffs. There are various information-processing vehicles by which processes and/or systems and/or other technologies described herein may be implemented, e.g., hardware, software, and/or firmware, and that the preferred vehicle may vary with the context in which the processes and/or systems and/or other technologies are deployed. For example, if an implementer determines that speed and accuracy are paramount, the implementer may opt for a mainly hardware and/or firmware vehicle; if flexibility is paramount, the implementer may opt for a mainly software implementation; or, yet again alternatively, the implementer may opt for some combination of hardware, software, and/or firmware.
[00144] The foregoing detailed description has set forth various embodiments of the devices and/or processes for system configuration via the use of block diagrams, flowcharts, and/or examples. Insofar as such block diagrams, flowcharts, and/or examples contain one or more functions and/or operations, it will be understood by those within the art that each function and/or operation within such block diagrams, flowcharts, or examples can be implemented, individually and/or collectively, by a wide range of hardware, software, firmware, or virtually any combination thereof. In one embodiment, several portions of the subject matter described herein may be implemented via Application Specific Integrated Circuits (ASICs), Field
Programmable Gate Arrays (FPGAs), digital signal processors (DSPs), or other integrated formats. However, those skilled in the art will recognize that some aspects of the embodiments disclosed herein, in whole or in part, can be equivalently implemented in integrated circuits, as one or more computer programs running on one or more computers (e.g., as one or more programs running on one or more computer systems), as one or more programs running on one or more processors (e.g., as one or more programs running on one or more microprocessors), as firmware, or as virtually any combination thereof, and that designing the circuitry and/or writing the code for the software and/or firmware would be well within the skill of one skilled in the art in light of this disclosure. In addition, those skilled in the art will appreciate that the
mechanisms of the subject matter described herein are capable of being distributed as a program product in a variety of forms, and that an illustrative embodiment of the subject matter described herein applies regardless of the particular type of signal bearing medium used to actually carry out the distribution. Examples of a signal-bearing medium include, but are not limited to, the following: a recordable type medium such as a floppy disk, a hard disk drive, a CD, a DVD, a digital tape, a computer memory, etc.; and a remote non-transitory storage medium accessed using a transmission type medium such as a digital and/or an analog communication medium (e.g., a fiber optic cable, a waveguide, a wired communications link, a wireless communication link, etc.), for example a server accessed via the internet.
[00145] Those skilled in the art will recognize that it is common within the art to describe devices and/or processes in the fashion set forth herein, and thereafter use engineering practices to integrate such described devices and/or processes into data-processing systems. That is, at least a portion of the devices and/or processes described herein can be integrated into a data processing system via a reasonable amount of experimentation. Those having skill in the art will recognize that a typical data processing system generally includes one or more of a system unit housing, a video display device, a memory such as volatile and non- volatile memory, processors such as microprocessors and digital signal processors, computational entities such as operating systems, drivers, graphical user interfaces, and applications programs, one or more interaction devices, such as a touch pad or screen, and/or control systems including feedback loops and control motors, e.g., feedback for sensing position and/or velocity; control motors for moving and/or adjusting components and/or quantities. A typical data processing system may be implemented utilizing any suitable commercially available components, such as those typically found in data computing/communication and/or network computing/communication systems.
[00146] The herein-described subject matter sometimes illustrates different components contained within, or connected with, different other components. It is to be understood that such depicted architectures are merely examples, and that in fact many other architectures can be implemented which achieve the same functionality. In a conceptual sense, any arrangement of components to achieve the same functionality is effectively "associated" such that the desired functionality is achieved. Hence, any two components herein combined to achieve a particular functionality can be seen as "associated with" each other such that the desired functionality is achieved, irrespective of architectures or intermedial components. Likewise, any two components so associated can also be viewed as being "operably connected", or "operably coupled", to each other to achieve the desired functionality, and any two components capable of being so associated can also be viewed as being "operably coupleable", to each other to achieve the desired functionality. Specific examples of operably coupleable include but are not limited to physically mateable and/or physically interacting components and/or wirelessly interactable and/or wirelessly interacting components and/or logically interacting and/or logically interactable components.
[00147] EXAMPLES
[00148] The inventive methodology was used with video captured from an iPhone 5s® (Apple, Inc., Cupertino, CA) to reconstruct interior environments so as to provide a floor layout with dimensions as described below. [00149] Example 1: Large Office
[00150] A video of a large office space was obtained. The video was reconstructed using the inventive methodology herein to provide a dense point-cloud output. FIG. 5A shows a point- cloud image 501 illustrating the dense point-cloud output.
[00151] Once the dense point-cloud illustrated by image 501 was reconstructed, boundaries were identified for the floor plane of the large office, and a line cloud 511 (e.g., a floorplan of the room's boundary) was generated in accordance with the inventive methodology. FIG. 5B shows the point-cloud image 501 with the line cloud 511 of the room boundary superimposed on the point cloud 501.
[00152] Once the boundaries were identified, a wireframe of the floor layout was generated. This process served to optimize the extraction of the boundary to achieve sub-pixel level accuracy. FIG. 5C shows an image of the floor layout wireframe including line cloud 511 and six labeled edges (A, B, C, D, E, and F) of the line cloud 511.
[00153] After generation of a wireframe, dimensions of the floor layout were generated. Such dimensions were suitable for incorporation into a number of file formats as discussed elsewhere herein, for example a DXF file. As would be recognized, Drawing Exchange Format (DXF) is a CAD data file format developed by Autodesk for enabling data interoperability between AutoCAD and other programs.
[00154] Measurements of the floor layout were scaled using a Reference Scale of 13.3429 (scale was determined from the marker placed in the scene). The line identifiers (line IDs) in Table 4 correspond to the six labeled edges (A, B, C, D, E, and F) of FIG. 5C.
Line ID   Scaled Length (in)   Observed Length (in)   Variance (in)   % Error
A         255.51               256                    -0.49           -0.192%
B         135.36               135                     0.36            0.266%
C         117.194              117                     0.194           0.166%
D         235.623              235                     0.623           0.264%
E         174.79               175                    -0.21           -0.120%
F         135.25               135                     0.25            0.185%

TABLE 4
Legend:
Scaled Length - measurement calculated by software from video input
Observed Length - measurement captured manually using a tape measure
Variance - difference between Scaled Length and Observed Length
% Error - percentage error of the variance between Scaled Length and Observed Length
[00155] As shown in Table 4, the measurement error in the floor layout obtained according to the inventive methodology was 0.266% or less as compared to the actual measured value.
[00156] Example 2: Curved-Wall Room
[00157] The inventive methodology was used to provide a floor layout of a room having a curved wall. The environment was captured with an iPhone 5s®. FIGS. 6A, 6B and 6C illustrate, respectively, the dense point-cloud output, the line-cloud output and the wireframe output. In some embodiments, the curved wall is represented by a plurality of short straight-line segments in the line cloud 611 that approximate the curve to a suitable accuracy.
[00158] FIG. 6A shows a point-cloud image 601 illustrating the dense point-cloud output.
[00159] FIG. 6B shows the point-cloud image 601 with a line cloud 611 of the room boundary superimposed on the point cloud 601. Once the boundaries were identified, a wireframe of the floor layout was generated. This process served to optimize the extraction of the boundary to achieve sub-pixel level accuracy.
[00160] FIG. 6C shows an image of the floor layout wireframe including line cloud 611 and six labeled edges (A, B, C, D, E, and F) of the line cloud 611.
[00161] Measurements of the curved wall room were scaled using a Reference Scale of 18.5443 (scale was determined from the marker placed in the scene). The measurement outputs are shown in Table 5.
Line ID   Scaled Length (in)   Observed Length (in)   Variance (in)   % Error
A         203.78               204                    -0.22           -0.108%
B         192.33               192                     0.33            0.172%
C         96.183               96                      0.183           0.190%
D         59.84                60                     -0.16           -0.267%
E         84.17                84                      0.17            0.202%
F         102.25               102                     0.25            0.244%

TABLE 5
Legend:
Scaled Length - measurement calculated by software from video input
Observed Length - measurement captured manually using a tape measure
Variance - difference between Scaled Length and Observed Length
% Error - percentage error of the variance between Scaled Length and Observed Length

[00162] As shown in Table 5, the measurement error in the floor layout obtained according to the inventive methodology was 0.267% or less as compared to the actual measured values of the curved-wall room.
[00163] In some embodiments, the present invention provides a first method that generates a 3D digital representation of an object of interest. The first method includes: a) receiving, into a computer, a plurality of 2D digital images of a scene, wherein: i) the scene includes a first object of interest, wherein the object of interest has a plurality of dimensions; ii) at least a portion of the plurality of the 2D digital images of the scene are overlapping with regard to the first object of interest; and iii) the plurality of 2D digital images are generated from a single passive image- capture device; b) processing, by the computer, at least a portion of the plurality of overlapping 2D digital images that includes the first object of interest using a 3D reconstruction process that incorporates a structure-from-motion algorithm, thereby generating a 3D digital representation of the first object of interest; and c) generating, using the computer, measurements of a first plurality of the plurality of dimensions of the first object of interest from the 3D digital representation.
[00164] In some embodiments of the first method, the single passive image-capture device is a video camera.
[00165] Some embodiments of the first method further include: using the 3D digital representation for generating at least one of a 3D model, a 3D point cloud, a 3D line cloud, and a 3D edge cloud, wherein each, independently, comprises at least one of the plurality of dimensions of the first object of interest.
[00166] In some embodiments of the first method, the obtaining of the measurements is performed substantially without a separate scaling operation.
[00167] Some embodiments of the first method further include: a) selecting at least one of the plurality of dimensions in the first object of interest, wherein each of the selected dimensions, independently, comprises an actual measurement value; b) extracting measurement data from the selected dimensions; and c) processing the extracted measurement data to provide an extracted measurement value for each selected dimension.
[00168] In some embodiments of the first method, the selecting of the at least one of the plurality of dimensions is performed automatically by a computer.
[00169] In some embodiments of the first method, the selecting of the at least one of the plurality of dimensions includes eliciting and receiving into a computer information that specifies the at least one of the plurality of dimensions from a user.
[00170] In some embodiments of the first method, a pixel accuracy of each extracted measurement value, independently, is represented in pixel units according to the formula: ((distance of object of interest from image-capture device) * (image-capture device sensor size)) / ((image-capture device resolution) * (image-capture device focal length)).
[00171] In some embodiments of the first method, the pixel accuracy of each extracted measurement value is about one pixel.
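A worked example of the formula in paragraph [00170] follows; the numbers (3 m object distance, 6.17 mm sensor width, 4000-pixel image width, 4.3 mm focal length) are illustrative assumptions, not values from the patent.

```python
# Worked example of the pixel-accuracy formula in [00170]-[00171].
def pixel_accuracy_mm(distance_mm, sensor_size_mm, resolution_px, focal_length_mm):
    """Real-world size covered by one pixel at the object's distance."""
    return (distance_mm * sensor_size_mm) / (resolution_px * focal_length_mm)

# One pixel spans about 1.1 mm at 3 m with these assumed camera parameters,
# so "about one pixel" of accuracy means roughly millimeter-level measurements.
print(pixel_accuracy_mm(3000, 6.17, 4000, 4.3))  # ~1.08 (mm per pixel)
```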
[00172] In some embodiments of the first method, each value of the extracted measurement data of each selected dimension is, independently, within about 5% of each corresponding actual measurement value.

Some embodiments of the first method further include: generating boundary information for the first object of interest.
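Returning to the accuracy criterion of paragraph [00172], a trivial check might look like the following sketch (a hypothetical helper, not from the patent):

```python
# Sketch of the criterion in [00172]: an extracted value passes if it is
# within about 5% of the corresponding actual measurement value.
def within_tolerance(extracted: float, actual: float, tol: float = 0.05) -> bool:
    return abs(extracted - actual) <= tol * abs(actual)

assert within_tolerance(203.78, 204)  # line A from Table 5 easily passes
```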
[00173] In some embodiments, the present invention provides a second method that obtains at least one measurement of an object of interest. The second method includes: a) receiving a plurality of 2D images of a scene from a single passive image-capture device, wherein the plurality of 2D images includes image data of a first object of interest present in the scene, and at least a portion of the plurality of 2D images of the scene are at least partially overlapping with regard to the first object of interest, thereby providing a plurality of overlapping 2D images that includes the first object of interest; b) generating, by a computer, a 3D digital representation of the first object of interest, wherein the 3D digital representation is obtained from at least a portion of the 2D digital images that include the first object of interest, using a process incorporating a structure-from-motion algorithm; c) eliciting and receiving, from either or both of the computer and the user, selection-identification information that identifies a plurality of dimensions of interest in the first object of interest, wherein each dimension, independently, comprises an actual measurement value; d) extracting data, by the computer, from the 3D digital representation, wherein the extracted data comprises measurement data comprising information corresponding to each identified dimension; and e) processing, by the computer, the extracted measurement data to provide an extracted measurement value for each selected dimension.
[00174] In some embodiments of the second method, an accuracy of each extracted measurement value, independently, is represented in pixels according to the formula:
((distance of object of interest from image-capture device) * (image-capture device sensor size)) / ((image-capture device resolution) * (image-capture device focal length)).

[00175] In some embodiments of the second method, a pixel accuracy of each extracted measurement value is about one pixel. In some embodiments of the second method, the images in the plurality of 2D images are video images. Some embodiments of the second method further include generating boundary information for the first object of interest.
[00176] In some embodiments, the present invention provides a third method that detects boundaries. The third method includes: a) receiving a plurality of 2D digital images of a scene, wherein: i) the scene includes a first object of interest having a plurality of boundaries, ii) at least a portion of the plurality of 2D digital images is overlapping with regard to the first object of interest, and iii) the plurality of 2D digital images are generated from a single passive image-capture device; and b) processing at least a portion of the plurality of overlapping 2D digital images that include the first object of interest using a method that incorporates a structure-from-motion algorithm, thereby providing detected boundary information for at least a portion of the first object of interest, wherein the detected boundary information can be represented as at least one of: i) a 3D digital representation, ii) a 3D model, iii) a 3D point cloud, iv) a 3D line cloud, and v) a 3D edge cloud. In some embodiments of the third method, the single passive image-capture device is a video camera. In some embodiments of the third method, measurements of at least a portion of the first object of interest are obtainable from the detected boundary information. In some embodiments, the first method, the second method and the third method are combined and executed as a single process.
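Purely as an illustration of a possible 2D front end for the third method, boundary candidates could be detected per frame and then triangulated across overlapping frames into a 3D line or edge cloud using the camera poses recovered by structure from motion; the detector choice and thresholds below are assumptions, not the patent's prescribed method.

```python
# Hypothetical per-frame boundary-candidate detector (illustrative only).
# Corresponding segments across overlapping frames would then be triangulated,
# using the recovered camera poses, into a 3D line/edge cloud.
import cv2
import numpy as np

def detect_2d_boundaries(gray_frame):
    """Return candidate boundary segments (x1, y1, x2, y2) in one frame."""
    edges = cv2.Canny(gray_frame, 50, 150)  # arbitrary hysteresis thresholds
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                               threshold=80, minLineLength=40, maxLineGap=5)
    return [] if segments is None else segments.reshape(-1, 4)
```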
[00177] The exemplary embodiments have been described and illustrated in the drawings and the specification. The exemplary embodiments were chosen and described in order to explain certain principles of the invention and their practical application, thereby enabling others skilled in the art to make and utilize various exemplary embodiments of the present invention, as well as various alternatives and modifications thereof. As is evident from the foregoing description, certain aspects of the present invention are not limited by the particular details of the examples illustrated herein, and it is therefore contemplated that other modifications and applications, or equivalents thereof, will occur to those skilled in the art. Many changes, modifications, variations and other uses and applications of the present construction will, however, become apparent to those skilled in the art after considering the specification and the accompanying drawings. All such changes, modifications, variations and other uses and applications which do not depart from the spirit and scope of the invention are deemed to be covered by the invention, which is limited only by the claims that follow.

Claims

WHAT IS CLAIMED IS:
1) A method for generating a 3D digital representation of an object of interest, the method comprising:
a) receiving, into a computer, a plurality of 2D digital images of a scene, wherein:
i) the scene includes a first object of interest, wherein the object of interest has a plurality of dimensions;
ii) at least a portion of the plurality of the 2D digital images of the scene are overlapping with regard to the first object of interest; and
iii) the plurality of 2D digital images are generated from a single passive image-capture device;
b) processing, by the computer, at least a portion of the plurality of overlapping 2D digital images that includes the first object of interest using a 3D reconstruction process that incorporates a structure-from-motion algorithm, thereby generating a 3D digital representation of the first object of interest; and
c) generating, using the computer, measurements of a first plurality of the plurality of dimensions of the first object of interest from the 3D digital representation.
2) The method of claim 1, wherein the single passive image-capture device is a video camera.
3) The method of claim 1, further comprising:
using the 3D digital representation for generating at least one of a 3D model, a 3D point cloud, a 3D line cloud, and a 3D edge cloud, wherein each, independently, comprises at least one of the plurality of dimensions of the first object of interest.
4) The method of claim 1, wherein the obtaining of the measurements is performed substantially without a separate scaling operation.
5) The method of claim 1, further comprising:
a) selecting at least one of the plurality of dimensions in the first object of interest, wherein each of the selected dimensions, independently, comprises an actual measurement value;
b) extracting measurement data from the selected dimensions; and
c) processing the extracted measurement data to provide an extracted measurement value for each selected dimension.

6) The method of claim 5, wherein the selecting of the at least one of the plurality of dimensions is performed automatically by a computer.
7) The method of claim 5, wherein the selecting of the at least one of the plurality of dimensions includes eliciting and receiving into a computer, from a user, information that specifies the at least one of the plurality of dimensions.
8) The method of claim 5, wherein a pixel accuracy of each extracted measurement value, independently, is represented in pixel units according to the formula:
((distance of object of interest from image-capture device) * (image-capture device sensor size)) / ((image-capture device resolution) * (image-capture device focal length)).
9) The method of claim 8, wherein the pixel accuracy of each extracted measurement value is about one pixel.
10) The method of claim 5, wherein each value of the extracted measurement data of each selected dimension is, independently, within about 5% of each corresponding actual measurement value.
11) The method of claim 1, further comprising generating boundary information for the first object of interest.
12) A computerized method of obtaining at least one measurement of an object of interest comprising:
a) receiving a plurality of 2D images of a scene from a single passive image-capture device, wherein the plurality of 2D images includes image data of a first object of interest present in the scene, and at least a portion of the plurality of 2D images of the scene are at least partially overlapping with regard to the first object of interest, thereby providing a plurality of overlapping 2D images that includes the first object of interest;
b) generating, by a computer, a 3D digital representation of the first object of interest, wherein the 3D digital representation is obtained from at least a portion of the 2D digital images that include the first object of interest, using a process incorporating a structure-from-motion algorithm;
c) eliciting and receiving, from either or both of the computer and the user, selection-identification information that identifies a plurality of dimensions of interest in the first object of interest, wherein each dimension, independently, comprises an actual measurement value;
d) extracting data, by the computer, from the 3D digital representation, wherein the extracted data comprises measurement data comprising information corresponding to each identified dimension; and
e) processing, by the computer, the extracted measurement data to provide an extracted measurement value for each selected dimension.
13) The method of claim 12, wherein an accuracy of each extracted measurement value, independently, is represented in pixels according to the formula:
((distance of object of interest from image-capture device) * (image-capture device sensor size)) / ((image-capture device resolution) * (image-capture device focal length)).
14) The method of claim 13, wherein a pixel accuracy of each extracted measurement value is about one pixel.
15) The method of claim 12, wherein the plurality of 2D images are video images.
16) The method of claim 12, further comprising generating boundary information for the first object of interest.
17) A method of boundary detection, comprising:
a) receiving a plurality of 2D digital images of a scene, wherein:
i) the scene includes a first object of interest having a plurality of boundaries,
ii) at least a portion of the plurality of 2D digital images is overlapping with regard to the first object of interest, and
iii) the plurality of 2D digital images are generated from a single passive image-capture device; and
b) processing at least a portion of the plurality of overlapping 2D digital images that include the first object of interest using a method that incorporates a structure-from-motion algorithm, thereby providing detected boundary information for at least a portion of the first object of interest, wherein the detected boundary information can be represented as at least one of:
i) a 3D digital representation,
ii) a 3D model,
iii) a 3D point cloud,
iv) a 3D line cloud, and
v) a 3D edge cloud.

18) The method of claim 17, wherein the single passive image-capture device is a video camera.
19) The method of claim 17, wherein the measurements of at least a portion of the first object of interest are obtainable from the detected boundary information.
20) The method of claim 17, further comprising:
c) processing, by a computer, at least a portion of the plurality of overlapping 2D digital images that includes the first object of interest using a 3D reconstruction process that incorporates a structure-from-motion algorithm, thereby generating a 3D digital representation of the first object of interest;
d) generating, using the computer, measurements of a first plurality of the plurality of dimensions of the first object of interest from the 3D digital representation;
e) eliciting and receiving, from either or both of the computer and the user, selection-identification information that identifies a plurality of dimensions of interest in the first object of interest, wherein each dimension, independently, comprises an actual measurement value;
f) extracting data, by the computer, from the 3D digital representation, wherein the extracted data comprises measurement data comprising information corresponding to each identified dimension; and
g) processing, by the computer, the extracted measurement data to provide an extracted measurement value for each selected dimension.
EP15852327.4A 2014-10-22 2015-10-21 Photogrammetric methods and devices related thereto Withdrawn EP3210165A4 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201462066925P 2014-10-22 2014-10-22
US201562165995P 2015-05-24 2015-05-24
US14/826,113 US20160239976A1 (en) 2014-10-22 2015-08-13 Photogrammetric methods and devices related thereto
PCT/US2015/056752 WO2016065063A1 (en) 2014-10-22 2015-10-21 Photogrammetric methods and devices related thereto

Publications (2)

Publication Number Publication Date
EP3210165A1 true EP3210165A1 (en) 2017-08-30
EP3210165A4 EP3210165A4 (en) 2017-10-18

Family

ID=59381728

Family Applications (1)

Application Number Title Priority Date Filing Date
EP15852327.4A Withdrawn EP3210165A4 (en) 2014-10-22 2015-10-21 Photogrammetric methods and devices related thereto

Country Status (1)

Country Link
EP (1) EP3210165A4 (en)

Also Published As

Publication number Publication date
EP3210165A4 (en) 2017-10-18

Similar Documents

Publication Publication Date Title
US9886774B2 (en) Photogrammetric methods and devices related thereto
US11783409B1 (en) Image-based rendering of real spaces
US11252329B1 (en) Automated determination of image acquisition locations in building interiors using multiple data capture devices
US11657419B2 (en) Systems and methods for building a virtual representation of a location
WO2016065063A1 (en) Photogrammetric methods and devices related thereto
CA3058602C (en) Automated mapping information generation from inter-connected images
Fathi et al. Automated as-built 3D reconstruction of civil infrastructure using computer vision: Achievements, opportunities, and challenges
US10026218B1 (en) Modeling indoor scenes based on digital images
EP2976748B1 (en) Image-based 3d panorama
US11645781B2 (en) Automated determination of acquisition locations of acquired building images based on determined surrounding room data
US11632602B2 (en) Automated determination of image acquisition locations in building interiors using multiple data capture devices
Sankar et al. Capturing indoor scenes with smartphones
US20190096089A1 (en) Enabling use of three-dimensonal locations of features with two-dimensional images
CN112154486B (en) System and method for multi-user augmented reality shopping
US11842464B2 (en) Automated exchange and use of attribute information between building images of multiple types
WO2014159483A2 (en) Translated view navigation for visualizations
US20230206393A1 (en) Automated Building Information Determination Using Inter-Image Analysis Of Multiple Building Images
EP4174772A1 (en) Automated building floor plan generation using visual data of multiple building
CA3069813C (en) Capturing, connecting and using building interior data from mobile devices
EP3210165A1 (en) Photogrammetric methods and devices related thereto
EP4358026A1 (en) Automated determination of acquisition locations of acquired building images based on identified surrounding objects

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20170519

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

A4 Supplementary search report drawn up and despatched

Effective date: 20170918

RIC1 Information provided on ipc code assigned before grant

Ipc: G06T 7/62 20170101AFI20170912BHEP

Ipc: G06T 7/579 20170101ALI20170912BHEP

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20180417