US10402676B2 - Automated system and methodology for feature extraction - Google Patents

Automated system and methodology for feature extraction

Info

Publication number
US10402676B2
Authority
US
United States
Prior art keywords
man-made structure
data points
running
readable medium
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US15/428,860
Other versions
US20170236024A1 (en)
Inventor
Yandong Wang
Frank Giuffrida
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Pictometry International Corp
Original Assignee
Pictometry International Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Pictometry International Corp filed Critical Pictometry International Corp
Priority to US15/428,860
Publication of US20170236024A1
Assigned to HPS INVESTMENT PARTNERS, LLC (second lien patent security agreement); assignor: PICTOMETRY INTERNATIONAL CORP.
Assigned to MORGAN STANLEY SENIOR FUNDING, INC., as collateral agent (first lien patent security agreement); assignor: PICTOMETRY INTERNATIONAL CORP.
Assigned to PICTOMETRY INTERNATIONAL CORP. (assignment of assignors' interest; see document for details); assignors: GIUFFRIDA, FRANK; WANG, YANDONG
Priority to US16/548,219
Application granted
Publication of US10402676B2
Priority to US17/063,255
Legal status: Active (adjusted expiration)

Classifications

    • G06K 9/4604
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/60 - Type of objects
    • G06V 20/64 - Three-dimensional objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/23 - Clustering techniques
    • G06F 18/232 - Non-hierarchical techniques
    • G06F 18/2321 - Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06K 9/00201
    • G06K 9/00536
    • G06K 9/00637
    • G06K 9/4652
    • G06K 9/6221
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/50 - Depth or shape recovery
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/70 - Determining position or orientation of objects or cameras
    • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/40 - Extraction of image or video features
    • G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/40 - Extraction of image or video features
    • G06V 10/56 - Extraction of image or video features relating to colour
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/762 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G06V 10/763 - Non-hierarchical techniques, e.g. based on statistics of modelling distributions
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/10 - Terrestrial scenes
    • G06V 20/176 - Urban or other man-made structures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2218/00 - Aspects of pattern recognition specially adapted for signal processing
    • G06F 2218/12 - Classification; Matching
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10032 - Satellite or aerial image; Remote sensing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30181 - Earth observation
    • G06T 2207/30184 - Infrastructure

Definitions

  • Remote sensing technology can be more cost-effective than manual inspection while providing pertinent information for assessment of roofing projects.
  • Images are currently being used to measure objects and structures within the images, as well as to determine geographic locations of points within an image when preparing estimates for a variety of construction projects, such as roadwork, concrete work, and roofing. See, for example, U.S. Pat. No. 7,424,133, which describes techniques for measuring within oblique images. Also see, for example, U.S. Pat. No. 8,145,578, which describes techniques for remotely measuring the size, geometry, pitch, and orientation of the roof sections of a building, and then using that information to provide an estimate to repair or replace the roof, or to install equipment thereon. Estimating construction projects using software increases the speed at which an estimate is prepared, and reduces labor and fuel costs associated with on-site visits.
  • Feature extraction can go beyond features represented within an image and provide useful information on features missing from an image. For example, tree density within a forest, or changes to the tree density over time, may be determined using the lack of trees within a known area of an image. Thus, the location of a missing feature may be relevant in feature extraction of the image.
  • FIG. 1 illustrates a block diagram for automatic feature extraction of one or more natural and/or man-made structures within an image, in accordance with the present disclosure.
  • FIG. 2 illustrates a schematic diagram of hardware forming an exemplary embodiment of a system for automatic feature extraction of one or more natural and/or man-made structures within an image.
  • the system includes an image capturing system and a computer system.
  • FIG. 3 illustrates a diagrammatic view of an example of the image-capturing system of FIG. 2 .
  • FIG. 4 illustrates a block diagram of the image-capturing computer system of FIG. 3 communicating via a network with multiple processors.
  • FIG. 5 illustrates a screen shot of an image of a region having multiple objects of interest in accordance with the present disclosure.
  • FIG. 6 illustrates a screen shot of a point cloud of the region illustrated in FIG. 5 .
  • FIG. 7 illustrates a screen shot of a modified point cloud having data points at an elevation of interest.
  • FIG. 8 illustrates a screen shot of buildings identified using data points of a point cloud in accordance with the present disclosure.
  • FIG. 9 illustrates a screen shot of an image showing boundaries on a building identified using the modified point cloud of FIG. 7 .
  • FIGS. 10-12 illustrate an exemplary method for automated object detection in accordance with the present disclosure.
  • FIG. 13 illustrates the building of FIG. 9 extracted from the image.
  • FIG. 14 illustrates a screen shot of roof features identified in accordance with the present disclosure.
  • FIG. 15 illustrates an exemplary roof report generated in accordance with the present disclosure.
  • The roofing industry is used herein as an example; feature extraction in one or more images in any industry is contemplated.
  • Identification of features absent from one or more images is also contemplated.
  • The phraseology and terminology employed herein are for purposes of description and should not be regarded as limiting.
  • the terms “comprises,” “comprising,” “includes,” “including,” “has,” “having,” or any other variations thereof, are intended to cover a non-exclusive inclusion.
  • a process, method, article, or apparatus that comprises a list of elements is not necessarily limited to only those elements, but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus.
  • “or” refers to an inclusive and not to an exclusive “or”. For example, a condition A or B is satisfied by any one of the following: A is true (or present) and B is false (or not present), A is false (or not present) and B is true (or present), or both A and B are true (or present).
  • any reference to “one embodiment,” “an embodiment,” “some embodiments,” “one example,” “for example,” or “an example” means that a particular element, feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment.
  • the appearance of the phrase “in some embodiments” or “one example” in various places in the specification is not necessarily all referring to the same embodiment, for example.
  • Circuitry may be analog and/or digital components, or one or more suitably programmed processors (e.g., microprocessors) and associated hardware and software, or hardwired logic. Also, “components” may perform one or more functions.
  • the term “component,” may include hardware, such as a processor (e.g., microprocessor), an application specific integrated circuit (ASIC), field programmable gate array (FPGA), a combination of hardware and software, and/or the like.
  • processor as used herein means a single processor or multiple processors working independently or together to collectively perform a task.
  • Software may include one or more computer readable instructions that when executed by one or more components cause the component to perform a specified function. It should be understood that the algorithms described herein may be stored on one or more non-transitory computer readable medium. Exemplary non-transitory computer readable mediums may include random access memory, read only memory, flash memory, and/or the like. Such non-transitory computer readable mediums may be electrically based, optically based, and/or the like.
  • the term user is not limited to a human being, and may comprise a computer, a server, a website, a processor, a network interface, a human, a user terminal, a virtual computer, combinations thereof, and the like, for example.
  • features within one or more images may be extracted.
  • missing features within an image may be determined.
  • features within images and/or missing features within images may be catalogued (e.g., spatial cataloguing) within one or more databases for retrieval and/or analysis. For example, measurements of features (e.g., size of building(s), height of tree(s), footprint(s) of man-made or non-man made features) may be obtained.
  • features may be stored in one or more databases with measurements of features associated therewith, in addition to other metadata associated with the feature and/or image (e.g., date, algorithms used).
  • one or more images having raster data depicting an object of interest may be obtained and stored in a geospatial database to be available for use in creating and/or interpreting a point cloud that overlaps with the one or more images, as shown in step 12 .
  • the raster data depicting the object of interest may depict visible light, typically in three bands (red, green, and blue), or one or more other modalities.
  • the raster data may include information for hyperspectral imaging which collects and processes information from across the electromagnetic spectrum.
  • the raster data may include near infrared data or thermal data.
  • Each image may be geo-referenced such that geographical coordinates are provided for each point in the images.
  • Geo-referencing may include associating each image, e.g., raster data, with camera pose parameters indicative of internal orientation information of the camera, and external orientation information of the camera.
  • Internal orientation information includes, but is not limited to, known or determinable characteristics including focal length, sensor size and aspect ratio, radial and other distortion terms, principal point offset, pixel pitch, and alignment.
  • External orientation information includes, but is not limited to, altitude, orientation in terms of roll, pitch and yaw, and the location of the camera relative to the Earth's surface. The internal and external orientation information can be obtained in a manner described, for example, in U.S. Pat. No. 7,424,133.
  • the internal and external orientation information can be obtained from analyzing overlapping images using any suitable technique, such as bundle-adjustment. Techniques for bundle adjustment are described, for example, in U.S. Pat. Nos. 6,996,254 and 8,497,905.
  • the internal and external orientation information can be stored within metadata of the image, or can be stored separately from the image and associated with the image utilizing any suitable technique, such as a unique code for each image stored within a look-up field within the geospatial database.
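The look-up-field approach described above can be sketched as follows; the dictionary schema, image code, field names, and sample values are illustrative assumptions, not the patent's storage format.

```python
# Sketch of storing internal/external orientation separately from the image
# and associating it via a unique code in a look-up field, as described
# above. Schema, field names, and values are illustrative assumptions.

camera_pose = {
    "IMG_0001": {
        "internal": {"focal_length_mm": 85.0, "pixel_pitch_um": 5.2},
        "external": {"altitude_m": 1500.0, "roll_deg": 0.5,
                     "pitch_deg": -1.2, "yaw_deg": 92.0,
                     "lat": 43.05, "lon": -77.60},
    },
}

def pose_for(image_code):
    """Return the pose parameters for a geo-referenced image, or None."""
    return camera_pose.get(image_code)
```

A production system would keep this table in the geospatial database itself, keyed by the same unique code stored with each image.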
  • a point cloud may be generated or obtained on and/or about the object of interest.
  • the point cloud may be generated using a 3D scanner, a detection system that works on the principle of radar, but uses light from a laser (these systems are known in the art and identified by the acronym “Lidar”), or from the images stored in the geospatial database using the internal and external orientation information for the images.
  • Techniques for generating a point cloud using internal and external information for the images as well as feature matching techniques are known to those skilled in the art.
  • Point clouds include a series of points in which each point may be identified with a three-dimensional (e.g., X, Y, and Z) position.
  • the three-dimensional position of points representing man-made or natural objects can be classified as such using the relative position of the points relative to other points, as well as the shape of a grouping of the points. Further analysis can be conducted on these points as well as images correlated with the points to extract features of the objects represented in the point cloud.
  • the point cloud may be formed such that all features having known locations within three dimensions are represented.
  • the point cloud(s) can be saved in any suitable format, such as point data, DSM/DTM, CAD, Tiff, Autodesk cloud, GeoTiff, and KMZ, for example.
  • the Z values of the points can be analyzed to determine whether the points represent the ground, or a man-made or natural object located above the ground.
  • an elevation value or an elevation gradient may be used to analyze the point cloud data to determine whether particular points represent the ground, a man-made object, or a natural object above the ground.
  • classification of ground surface and non-ground surface may be determined for each data point. Identification of features on the ground surface versus features on a non-ground surface may aid in differentiation between features within the image. For example, a shadow created by a roof may have similar characteristics to the roof and be difficult for detection within an image using spectral analysis. Identification that the shadow is on the ground surface, however, would differentiate the shadow of the roof from the actual roof.
  • Data points can be transformed with information classifying the type of object that the data point represents. For example, data points indicative of the ground surface may be classified within the point cloud as being part of the ground surface. In some embodiments, data points of certain type(s) of objects can be removed from the point cloud to enhance processing of the remaining type(s) of objects within a modified point cloud. For example, all of the data points classified as being part of the ground surface may be removed from the point cloud to enhance the analysis of the points representing man-made objects or natural objects.
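The ground-removal step above can be sketched as a minimal example: points at or below an elevation of interest are labeled as ground and then dropped to form the modified point cloud. The function names and the single-threshold heuristic are illustrative assumptions, not the patent's exact algorithm.

```python
# Minimal sketch: classify (x, y, z) points by elevation, then remove
# ground-classified points to form the modified point cloud.

def classify_points(points, ground_elevation):
    """Label each (x, y, z) point as 'ground' or 'non-ground' by its Z value."""
    return [
        (x, y, z, "ground" if z <= ground_elevation else "non-ground")
        for (x, y, z) in points
    ]

def modified_point_cloud(classified):
    """Drop ground-classified points, keeping candidate objects only."""
    return [p for p in classified if p[3] != "ground"]

cloud = [(0.0, 0.0, 101.2), (1.0, 0.0, 101.4), (0.5, 0.5, 109.8)]
labeled = classify_points(cloud, ground_elevation=102.0)
objects = modified_point_cloud(labeled)  # only the elevated point remains
```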
  • Certain objects of interest may be identified, and in a step 18 , the object of interest may be detected using the point cloud and elevation gradient.
  • man-made structures, natural structures, and ground structures may also be determined.
  • Natural structures and ground structures may be classified, and/or removed from the point cloud forming a modified point cloud. For example, assuming that buildings are the object of interest, points in the point cloud representing the building will have a higher elevation (e.g., Z value) than points in the point cloud representing the ground surface. In this case, the points having a lower elevation Z-value than adjacent points can be classified as the ground surface, and the other points in the point cloud can be initially classified as either a man-made structure, or a natural structure.
  • the classification can be accomplished by storing additional data within the point cloud. Even further, an elevation of interest may be determined and all data points below the elevation of interest may be classified as the ground structure, or all data points above the elevation of interest may be classified as a natural object or a man-made object.
  • a point cloud is not needed for identification of features within an image or missing features within an image as described herein.
  • the point cloud may aid in identification of ground surface versus non-ground surface, but is not a mandatory step in determination of features within an image or features missing within an image.
  • the points within the point cloud that are initially classified as not being part of the ground structure are further analyzed to determine whether the points represent a man-made object (e.g., a building), or a natural object, (e.g., a tree). This can be accomplished by analyzing the shape of a grouping of the points. Groupings of points having planar surfaces (e.g., roof section(s)) detectable within the point cloud can be classified as a man-made object, and groupings of points devoid of planar surfaces (e.g., tree(s)) detectable within the point cloud can be classified as a natural object. This can be accomplished by analyzing a variation of surface normal direction between each point of a group of points and the other points within the group.
  • the points are classified as either representing a man-made object or a natural object.
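The surface-normal-variation test described above can be sketched as follows: a grouping whose local normals barely vary is treated as planar (man-made), while large variation suggests a natural object such as a tree. The triple-based normal estimate and the 0.1-radian tolerance are assumptions for illustration.

```python
import math

def normal(p, q, r):
    """Unit normal of the triangle (p, q, r) via the cross product."""
    u = [q[i] - p[i] for i in range(3)]
    v = [r[i] - p[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    mag = math.sqrt(sum(c * c for c in n)) or 1.0  # guard degenerate triples
    return [c / mag for c in n]

def normal_spread(points):
    """Largest angle (radians) between normals of consecutive point triples."""
    normals = [normal(points[i], points[i + 1], points[i + 2])
               for i in range(len(points) - 2)]
    worst = 0.0
    for a in normals:
        for b in normals:
            dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(a, b))))
            worst = max(worst, math.acos(abs(dot)))  # abs(): ignore flipped normals
    return worst

def classify_group(points, tol=0.1):
    """'man-made' if the grouping is nearly planar, else 'natural'."""
    return "man-made" if normal_spread(points) < tol else "natural"
```

A real pipeline would estimate normals over k-nearest-neighbor patches rather than consecutive triples, but the classification criterion is the same.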
  • points within the point cloud that represent the man-made object can be further analyzed to determine one or more features (e.g., roof) of the object of interest (e.g., building).
  • the features may be classified using the modified point cloud in which the points have been classified, and/or certain points have been removed from the point cloud.
  • first information (e.g., an initial boundary) of the object of interest may be determined by looking for an outer boundary of a group of points, as well as analyzing the Z values of the points and looking for differences above a threshold between the Z values of adjacent points.
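The adjacent-Z-difference test above can be sketched on a small rasterized elevation grid: cells whose elevation jumps past a threshold relative to any 4-neighbor approximate the object's outer boundary. The grid representation and threshold value are illustrative assumptions.

```python
def z_jump_edges(grid, threshold=2.0):
    """Return the set of (row, col) cells lying on an elevation discontinuity."""
    rows, cols = len(grid), len(grid[0])
    edges = set()
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and abs(grid[r][c] - grid[nr][nc]) > threshold):
                    edges.add((r, c))
                    break
    return edges

# A single tall cell yields a boundary of itself plus its four neighbours.
elevations = [[0.0, 0.0, 0.0],
              [0.0, 10.0, 0.0],
              [0.0, 0.0, 0.0]]
boundary = z_jump_edges(elevations)
```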
  • the boundaries of the object of interest may be further determined and/or refined using a first location (e.g., latitude and longitude) of the points within the point cloud that are determined to be part of the object of interest and querying the geospatial database to obtain images having raster data depicting the object of interest. Then, standard edge detection methods and/or spectral analysis can be used to precisely determine second information, e.g., the boundary of the object of interest having second location coordinates.
  • the second location coordinates can be X,Y pixel coordinates, or latitude/longitude and elevation.
  • the first information may be initially determined using the point cloud or modified point cloud, and then refined to generate second information by registering the point cloud data or modified point cloud data with the raster data within one or more images, and then analyzing the raster data with one or more suitable image processing technique.
  • suitable image processing techniques include, but are not limited to standard edge detection methods or spectral analysis methods.
  • Spectral analysis may be used to group data points of an object of interest. For example, data points of a feature within an image may have similar spectral signatures such that reflected and/or absorbed electromagnetic radiation may be similar and able to be differentiated from data points of other features within the image. For example, data points of a building may have spectral signatures different from data points of grass surrounding the building. Data points within the image having similar spectral signatures may thus be grouped to identify one or more features within the image. Exemplary spectral analysis methods are described in U.S. Pat. No. 9,070,018, the entire content of which is incorporated herein by reference.
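Grouping by similar spectral signatures, as described above, can be sketched minimally: pixels whose RGB values fall within a tolerance of a group's first ("seed") pixel join that group. The greedy seeding scheme and tolerance value are illustrative assumptions, not the method of U.S. Pat. No. 9,070,018.

```python
def spectral_groups(pixels, tol=30.0):
    """Partition (r, g, b) pixels into groups of similar spectral signature."""
    groups = []
    for px in pixels:
        for group in groups:
            seed = group[0]
            dist = sum((a - b) ** 2 for a, b in zip(px, seed)) ** 0.5
            if dist <= tol:
                group.append(px)
                break
        else:  # no existing group was close enough: start a new one
            groups.append([px])
    return groups

# Reddish roof pixels separate cleanly from greenish grass pixels.
pixels = [(200, 50, 50), (205, 55, 48), (60, 180, 60), (58, 178, 65)]
groups = spectral_groups(pixels)
```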
  • thermal analysis of the object of interest may be used to group data points of objects of interest having similar or different thermal signatures.
  • a thermographic camera may be used to obtain an image using infrared radiation. Data points may be grouped based on temperature measurements of each feature within the image. Thermal analysis may be in addition to, or in lieu of a typical image (e.g., RGB image).
  • the object of interest may be extracted, classified, and/or isolated within the image. Additionally, the object of interest may be cataloged within one or more databases. Cataloging may be via address, size of feature(s) and/or object of interest, color(s) of features and/or object of interest, feature type, spatial relations (e.g., address, coordinates), and/or the like.
  • the object of interest may be spatially cataloged within one or more databases.
  • one or more features of the object of interest may be isolated and/or extracted.
  • One or more outlines of the one or more features may also be determined, e.g., a polygon outline of one facet of a roof. Each line of the outline may be spatially stored within a database such that retrieval may be via coordinates, address, and/or the like.
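A spatial catalogue of this kind can be sketched with an in-memory database: each extracted feature outline is stored with measurements and metadata so it can later be retrieved by address or coordinates. The table schema, WKT outline string, and sample values are assumptions, not the patent's storage format.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE features (
    address TEXT, feature_type TEXT, lat REAL, lon REAL,
    outline_wkt TEXT, area_sq_ft REAL, captured_on TEXT)""")
db.execute(
    "INSERT INTO features VALUES (?, ?, ?, ?, ?, ?, ?)",
    ("12 Elm St", "roof_facet", 43.05, -77.60,
     "POLYGON((0 0, 40 0, 40 30, 0 30, 0 0))", 1200.0, "2017-02-09"))
db.commit()

# Retrieval by address, one of the catalogue keys the passage mentions.
row = db.execute(
    "SELECT feature_type, area_sq_ft FROM features WHERE address = ?",
    ("12 Elm St",)).fetchone()
```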
  • In a step 26, further analysis of the object of interest and/or the image may be performed.
  • data points representing the outer boundaries can be used to calculate the perimeter of the roof; data points representing a ridge and a valley bordering a roof section can be used to calculate a pitch of the roof section.
  • These techniques can be used to calculate a variety of roof features, roof dimensions, and/or roof pitch of sections of the roof.
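The perimeter and pitch calculations above can be sketched as follows. Pitch is expressed as rise per 12 units of horizontal run (the common roofing convention); the function names and conventions are illustrative.

```python
import math

def perimeter(boundary):
    """Perimeter of a closed polygon of (x, y) boundary data points."""
    return sum(math.dist(boundary[i], boundary[(i + 1) % len(boundary)])
               for i in range(len(boundary)))

def roof_pitch(ridge_pt, eave_pt):
    """Pitch of a roof section from a ridge point and an eave/valley point,
    each given as (x, y, z); returns rise per 12 units of horizontal run."""
    rise = ridge_pt[2] - eave_pt[2]
    run = math.hypot(ridge_pt[0] - eave_pt[0], ridge_pt[1] - eave_pt[1])
    return 12.0 * rise / run
```

For example, a ridge 3 units above an eave 6 horizontal units away gives the familiar "6/12" pitch.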
  • the roof outline can be saved as a data file using any suitable format, such as a vector format, or a raster format.
  • the calculated data and the data file of the roof outline can be saved in the geospatial database or a separate database that may or may not be correlated with the geospatial database. Further, the latitude and longitude of the data points can be used to determine a physical address of a particular object of interest (e.g., building) and such address can be stored with the calculated data. The calculated data, the image data and the address can be correlated together, and automatically used to populate a template thereby preparing a predetermined report about the object of interest including one or more images of the object of interest and calculated data about the object of interest. This methodology can be automatically executed by one or more processors as discussed herein to identify and obtain information about objects of interest for a variety of purposes.
  • the methodology can be automatically executed by one or more processors to generate reports for a plurality of objects of interest, without manual or human intervention.
  • the presently described methodology provides a performance increase over conventional methods for generating object reports, as well as an enhancement to the operation of the processor when generating reports for one or more objects of interest.
  • an image capturing system 28 may be used to obtain the one or more images.
  • the image capturing system 28 may include a platform 30 carrying the image capturing system 28 .
  • the platform 30 may be an airplane, unmanned aerial system, space shuttle, rocket, satellite, and/or any other suitable vehicle capable of carrying the image capturing system 28 .
  • the platform 30 may be a fixed wing aircraft.
  • the platform 30 may carry the image capturing system 28 at one or more altitudes above a ground surface 32 .
  • the platform 30 may carry the image capturing system 28 over a predefined area and at one or more predefined altitudes above the Earth's surface and/or any other surface of interest.
  • the platform 30 is illustrated carrying the image capturing system 28 at a plane P above the ground surface 32 .
  • the platform 30 may be capable of controlled movement and/or flight. As such, the platform 30 may be manned or unmanned. In some embodiments, the platform 30 may be capable of controlled movement and/or flight along a pre-defined flight path and/or course. For example, the platform 30 may be capable of controlled movement and/or flight along the Earth's atmosphere and/or outer space.
  • the platform 30 may include one or more systems for generating and/or regulating power.
  • the platform 30 may include one or more generators, fuel cells, solar panels, and/or batteries for powering the image capturing system 28 .
  • the image capturing system 28 may include one or more image capturing devices 34 configured to obtain an image from which a point cloud may be generated.
  • the image capturing system 28 may optionally include one or more LIDAR scanners 36 to generate data that can be used to create a point cloud.
  • the image capturing system 28 may include one or more global positioning system (GPS) receivers 38 , one or more inertial navigation units (INU) 40 , one or more clocks 42 , one or more gyroscopes 44 , one or more compasses 46 , and one or more altimeters 48 .
  • the image capturing system 28 may include one or more thermographic cameras configured to capture one or more thermographic images. One or more of these elements of the image capturing system 28 may be interconnected with an image capturing and processing computer system 50 . In some embodiments, the internal and external orientation information for the images can be determined during post processing using an image processing technique, such as bundle adjustment. In these embodiments, the image capturing system 28 may not include the one or more INU 40 .
  • the one or more image capturing devices 34 may be capable of capturing images photographically and/or electronically.
  • the one or more image capturing devices 34 may be capable and/or configured to provide oblique and/or vertical images, and may include, but are not limited to, conventional cameras, digital cameras, digital sensors, charge-coupled devices, thermographic cameras and/or the like.
  • the one or more image capturing devices 34 may be one or more ultra-high resolution cameras.
  • the one or more image capturing devices 34 may be ultra-high resolution capture systems, such as may be found in the Pictometry PentaView Capture System, manufactured and used by Pictometry International based in Henrietta, N.Y.
  • the one or more image capturing devices 34 may include known or determinable characteristics including, but not limited to, focal length, sensor size, aspect ratio, radial and other distortion terms, principal point offset, pixel pitch, alignment, and/or the like.
  • the one or more image capturing devices 34 may acquire one or more images and issue one or more image data signals 52 corresponding to one or more particular images taken. Such images may be stored in the image capturing and processing computer system 50 .
  • the LIDAR scanner 36 may determine a distance between the platform 30 and objects on or about the ground surface 32 by illuminating them with a laser and analyzing the reflected light to provide data points.
  • software associated with the LIDAR scanner 36 may generate a depth map or point cloud based on the measured distance between the platform 30 and points on and/or about the object of interest.
  • the LIDAR scanner 36 may issue one or more data signals 54 to the image capturing and processing computer system 50 of such data points providing a point cloud wherein each data point may represent a particular coordinate.
  • An exemplary LIDAR scanner 36 may be the Riegl LMS-Q680i, manufactured and distributed by Riegl Laser Measurement Systems located in Horn, Austria. It should be noted that distance between the platform 30 and objects on or about the ground surface 32 may be determined via other methods including, but not limited to, stereographic methods.
  • the LIDAR scanner 36 may be a downward projecting high pulse rate LIDAR scanning system. It should be noted that other three-dimensional optical distancing systems or intensity-based scanning techniques may be used.
  • the LIDAR scanner 36 may be optional as a point cloud may be generated using one or more images and photogrammetric image processing techniques, e.g., the images may be geo-referenced using position and orientation of the image capturing devices 34 and matched together to form the point cloud.
  • the GPS receiver 38 may receive global positioning system (GPS) signals 56 that may be transmitted by one or more global positioning system satellites 58 .
  • the GPS signals 56 may enable the location of the platform 30 relative to the ground surface 32 and/or an object of interest to be determined.
  • the GPS receiver 38 may decode the GPS signals 56 and/or issue location signals 60 .
  • the location signals 60 may be dependent, at least in part, on the GPS signals 56 and may be indicative of the location of the platform 30 relative to the ground surface 32 and/or an object of interest.
  • the location signals 60 corresponding to each image captured by the image capturing devices 34 may be received and/or stored by the image capturing and processing computer system 50 in a manner in which the location signals are associated with the corresponding image.
  • the INU 40 may be a conventional inertial navigation unit.
  • the INU 40 may be coupled to and detect changes in the velocity (e.g., translational velocity, rotational velocity) of the one or more image capturing devices 34 , the LIDAR scanner 36 , and/or the platform 30 .
  • the INU 40 may issue velocity signals and/or data signals 62 indicative of such velocities and/or changes therein to the image capturing and processing computer system 50 .
  • the image capturing and processing computer system 50 may then store the velocity signals and/or data 62 corresponding to each image captured by the one or more image capturing devices 34 and/or data points collected by the LIDAR scanner 36 .
  • the clock 42 may keep a precise time measurement.
  • the clock 42 may keep a precise time measurement used to synchronize events.
  • the clock 42 may include a time data/clock signal 64 .
  • the time data/clock signal 64 may include a precise time that an image is taken by the one or more image capturing devices 34 and/or the precise time that points are collected by the LIDAR scanner 36 .
  • the time data 64 may be received by and/or stored by the image capturing and processing computer system 50 .
  • the clock 42 may be integral with the image capturing and processing computer system 50 , such as, for example, a clock software program.
  • the gyroscope 44 may be a conventional gyroscope commonly found on airplanes and/or within navigation systems (e.g., commercial navigation systems for airplanes).
  • the gyroscope 44 may submit signals including a yaw signal 66 , a roll signal 68 , and/or a pitch signal 70 .
  • the yaw signal 66 , the roll signal 68 , and/or the pitch signal 70 may be indicative of the yaw, roll, and pitch of the platform 30 .
  • the yaw signal 66 , the roll signal 68 , and/or the pitch signal 70 may be received and/or stored by the image capturing and processing computer system 50 .
  • the compass 46 may be any conventional compass (e.g., conventional electronic compass) capable of indicating the heading of the platform 30 .
  • the compass 46 may issue a heading signal and/or data 72 .
  • the heading signal and/or data 72 may be indicative of the heading of the platform 30 .
  • the image capturing and processing computer system 50 may receive, store and/or provide the heading signal and/or data 72 corresponding to each image captured by the one or more image capturing devices 34 .
  • the altimeter 48 may indicate the altitude of the platform 30 .
  • the altimeter 48 may issue an altimeter signal and/or data 74 .
  • the image capturing and processing computer system 50 may receive, store and/or provide the altimeter signal and/or data 74 corresponding to each image captured by the one or more image capturing devices 34 .
  • the image capturing and processing computer system 50 may be a system or systems that are able to embody and/or execute the logic of the processes described herein.
  • Logic embodied in the form of software instructions and/or firmware may be executed on any appropriate hardware.
  • logic embodied in the form of software instructions or firmware may be executed on a dedicated system or systems, or on a personal computer system, or on a distributed processing computer system, and/or the like.
  • logic may be implemented in a stand-alone environment operating on a single computer system and/or logic may be implemented in a networked environment, such as a distributed system using multiple computers and/or processors.
  • a subsystem of the image capturing and processing computer system 50 can be located on the platform 30
  • another subsystem of the image capturing and processing computer system 50 can be located in a data center having multiple computers and/or processor networked together.
  • the image capturing and processing computer system 50 may include one or more processors 76 communicating with one or more image capturing input devices 78 , image capturing output devices 80 , and/or I/O ports 82 enabling the input and/or output of data to and from the image capturing and processing computer system 50 .
  • FIG. 4 illustrates the image capturing and processing computer system 50 having a single processor 76 . It should be noted, however, that the image capturing and processing computer system 50 may include multiple processors 76 . In some embodiments, the processor 76 may be partially or completely network-based or cloud-based. The processor 76 may or may not be located in a single physical location. Additionally, multiple processors 76 may or may not necessarily be located in a single physical location.
  • the one or more image capturing input devices 78 may be capable of receiving information input from a user and/or processor(s), and transmitting such information to the processor 76 .
  • the one or more image capturing input devices 78 may include, but are not limited to, implementation as a keyboard, touchscreen, mouse, trackball, microphone, fingerprint reader, infrared port, slide-out keyboard, flip-out keyboard, cell phone, PDA, video game controller, remote control, fax machine, network interface, speech recognition, gesture recognition, eye tracking, brain-computer interface, combinations thereof, and/or the like.
  • the one or more image capturing output devices 80 may be capable of outputting information in a form perceivable by a user and/or processor(s).
  • the one or more image capturing output devices 80 may include, but are not limited to, implementations as a computer monitor, a screen, a touchscreen, a speaker, a website, a television set, a smart phone, a PDA, a cell phone, a fax machine, a printer, a laptop computer, an optical head-mounted display (OHMD), combinations thereof, and/or the like.
  • the one or more image capturing input devices 78 and the one or more image capturing output devices 80 may be implemented as a single device, such as, for example, a touchscreen or a tablet.
  • Each of the data signals 52 , 54 , 60 , 62 , 64 , 66 , 68 , 70 , 72 , and/or 74 may be provided to the image capturing and processing computer system 50 .
  • each of the data signals 52 , 54 , 60 , 62 , 64 , 66 , 68 , 70 , 72 , and/or 74 may be received by the image capturing and processing computer system 50 via the I/O port 82 .
  • the I/O port 82 may comprise one or more physical and/or virtual ports.
  • the image capturing and processing computer system 50 may be in communication with one or more additional processors 84 as illustrated in FIG. 4 .
  • the image capturing and processing computer system 50 may communicate with the one or more additional processors 84 via a network 86 .
  • the terms “network-based”, “cloud-based”, and any variations thereof, may include the provision of configurable computational resources on demand via interfacing with a computer and/or computer network, with software and/or data at least partially located on the computer and/or computer network, by pooling processing power of two or more networked processors.
  • the network 86 may be the Internet and/or other network.
  • a primary user interface of the image capturing software and/or image manipulation software may be delivered through a series of web pages. It should be noted that the primary user interface of the image capturing software and/or image manipulation software may be replaced by another type of interface, such as, for example, a Windows-based application.
  • the network 86 may be almost any type of network.
  • the network 86 may interface by optical and/or electronic interfaces, and/or may use a plurality of network topographies and/or protocols including, but not limited to, Ethernet, TCP/IP, circuit switched paths, and/or combinations thereof.
  • the network 86 may be implemented as the World Wide Web (or Internet), a local area network (LAN), a wide area network (WAN), a metropolitan network, a wireless network, a cellular network, a Global System for Mobile Communications (GSM) network, a code division multiple access (CDMA) network, a 3G network, a 4G network, a satellite network, a radio network, an optical network, a cable network, a public switched telephone network, an Ethernet network, combinations thereof, and/or the like.
  • the network 86 may use a variety of network protocols to permit bi-directional interface and/or communication of data and/or information. It is conceivable that in the near future, embodiments of the present disclosure may use more advanced networking topologies.
  • the image capturing and processing computer system 50 may be capable of interfacing and/or communicating with the one or more computer systems including processors 84 via the network 86 . Additionally, the one or more processors 84 may be capable of communicating with each other via the network 86 .
  • the processors 84 may include, but are not limited to implementation as a variety of different types of computer systems, such as a server system having multiple servers in a configuration suitable to provide a commercial computer based business system (such as a commercial web-site and/or data center), a personal computer, a smart phone, a network-capable television set, a television set-top box, a tablet, an e-book reader, a laptop computer, a desktop computer, a network-capable handheld device, a video game console, a server, a digital video recorder, a DVD player, a Blu-Ray player, a wearable computer, a ubiquitous computer, combinations thereof, and/or the like.
  • the computer systems comprising the processors 84 may include one or more input devices 88 , one or more output devices 90 , processor executable code, and/or a web browser capable of accessing a website and/or communicating information and/or data over a network, such as network 86 .
  • the computer systems comprising the one or more processors 84 may include one or more non-transient memory comprising processor executable code and/or software applications, for example.
  • the image capturing and processing computer system 50 may be modified to communicate with any of these processors 84 and/or future developed devices capable of communicating with the image capturing and processing computer system 50 via the network 86 .
  • the one or more input devices 88 may be capable of receiving information input from a user, processors, and/or environment, and transmit such information to the processor 84 and/or the network 86 .
  • the one or more input devices 88 may include, but are not limited to, implementation as a keyboard, touchscreen, mouse, trackball, microphone, fingerprint reader, infrared port, slide-out keyboard, flip-out keyboard, cell phone, PDA, video game controller, remote control, fax machine, network interface, speech recognition, gesture recognition, eye tracking, brain-computer interface, combinations thereof, and/or the like.
  • the one or more output devices 90 may be capable of outputting information in a form perceivable by a user and/or processor(s).
  • the one or more output devices 90 may include, but are not limited to, implementations as a computer monitor, a screen, a touchscreen, a speaker, a website, a television set, a smart phone, a PDA, a cell phone, a fax machine, a printer, a laptop computer, an optical head-mounted display (OHMD), combinations thereof, and/or the like.
  • the one or more input devices 88 and the one or more output devices 90 may be implemented as a single device, such as, for example, a touchscreen or a tablet.
  • the image capturing and processing computer system 50 may include one or more processors 76 working together, or independently to execute processor executable code, and one or more memories 92 capable of storing processor executable code.
  • each element of the image capturing and processing computer system 50 may be partially or completely network-based or cloud-based, and may or may not be located in a single physical location.
  • the one or more processors 76 may be implemented as a single or plurality of processors working together, or independently, to execute the logic as described herein. Exemplary embodiments of the one or more processors 76 may include, but are not limited to, a digital signal processor (DSP), a central processing unit (CPU), a field programmable gate array (FPGA), a microprocessor, a multi-core processor, and/or combination thereof, for example.
  • the one or more processors 76 may be capable of communicating via the network 86 , illustrated in FIG. 4 , by exchanging signals (e.g., analog, digital, optical, and/or the like) via one or more ports (e.g., physical or virtual ports) using a network protocol.
  • the processors 76 may be located remotely from one another, may be located in the same location, or may comprise a unitary multi-core processor.
  • the one or more processors 76 may be capable of reading and/or executing processor executable code and/or capable of creating, manipulating, retrieving, altering, and/or storing data structures into one or more memories 92 .
  • the one or more memories 92 may be capable of storing processor executable code. Additionally, the one or more memories 92 may be implemented as a conventional non-transitory memory, such as, for example, random access memory (RAM), a CD-ROM, a hard drive, a solid state drive, a flash drive, a memory card, a DVD-ROM, a floppy disk, an optical drive, combinations thereof, and/or the like, for example.
  • the one or more memories 92 may be located in the same physical location as the image capturing and processing computer system 50 .
  • one or more memories 92 may be located in a different physical location as the image capturing and processing computer system 50 , with the image capturing and processing computer system 50 communicating with one or more memories 92 via a network such as the network 86 , for example.
  • one or more of the memories 92 may be implemented as a “cloud memory” (i.e., one or more memories 92 may be partially or completely based on or accessed using a network, such as network 86 , for example).
  • the one or more memories 92 may store processor executable code and/or information comprising one or more databases 94 and program logic 96 (i.e., computer executable logic).
  • the processor executable code may be stored as a data structure, such as a database and/or data table, for example.
  • one of the databases 94 can be a geospatial database storing aerial images
  • another one of the databases 94 can store point clouds
  • another one of the databases 94 can store the internal and external orientation information for geo-referencing the images within the geospatial database.
  • the image capturing and processing computer system 50 may execute the program logic 96 which may control the reading, manipulation, and/or storing of data signals 52 , 54 , 60 , 62 , 64 , 66 , 68 , 70 , 72 , and/or 74 .
  • the program logic may read data signals 52 and 54 , and may store them within the one or more memories 92 .
  • Each of the signals 60 , 62 , 64 , 66 , 68 , 70 , 72 and 74 may represent the conditions existing at the instance that an oblique image and/or nadir image is acquired and/or captured by the one or more image capturing devices 34 .
  • the image capturing and processing computer system 50 may issue an image capturing signal to the one or more image capturing devices 34 to thereby cause those devices to acquire and/or capture an oblique image and/or a nadir image at a predetermined location and/or at a predetermined interval. In some embodiments, the image capturing and processing computer system 50 may issue the image capturing signal dependent, at least in part, on the velocity of the platform 30 . Additionally, the image capturing and processing computer system 50 may issue a point collection signal to the LIDAR scanner 36 to thereby cause the LIDAR scanner to collect points at a predetermined location and/or at a predetermined interval.
  • Program logic 96 of the image capturing and processing computer system 50 may decode, as necessary, and/or store the aforementioned signals within the memory 92 , and/or associate the data signals with the image data signals 52 corresponding thereto, or the LIDAR scanner signal 54 corresponding thereto.
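The association described above, tying each captured image to the sensor signals sampled at the instant of capture, can be sketched as a simple record. The field names and sample values below are illustrative assumptions, not the patent's actual data layout:

```python
from dataclasses import dataclass

@dataclass
class ImageRecord:
    """One captured image plus the sensor readings sampled at the
    moment of capture (signals 60-74 in the text)."""
    image_id: str
    timestamp: float   # time data/clock signal 64
    latitude: float    # location signal 60
    longitude: float
    altitude: float    # altimeter signal 74
    yaw: float         # yaw signal 66
    roll: float        # roll signal 68
    pitch: float       # pitch signal 70
    heading: float     # heading signal 72

# Hypothetical record for one oblique frame:
rec = ImageRecord("img_0001", 1486700000.0, 43.15, -77.61,
                  650.0, 1.2, -0.3, 0.8, 92.5)
```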
  • the altitude, orientation, roll, pitch, yaw, and the location of each image capturing device 34 relative to the ground surface 32 and/or object of interest for images captured may be known. More particularly, the [X,Y,Z] location (e.g., latitude, longitude, and altitude) of an object or location seen within each image may be determined.
  • the altitude, orientation, roll, pitch, yaw, and the location of the LIDAR scanner 36 relative to the ground surface 32 and/or object of interest for collection of data points may be known. More particularly, the [X,Y,Z] location (e.g., latitude, longitude, and altitude) of a targeted object or location may be determined. In some embodiments, location data for the targeted object or location may be catalogued within one or more databases for retrieval.
  • the platform 30 may be piloted and/or guided through an image capturing path that may pass over a particular area of the ground surface 32 .
  • the number of times the platform 30 and/or the one or more image capturing devices 34 and LIDAR scanner 36 pass over the area of interest may be dependent at least in part upon the size of the area and the amount of detail desired in the captured images.
  • a number of images may be captured by the one or more image capturing devices 34 and data points may be captured by the LIDAR scanner 36 (optional).
  • the images may be captured and/or acquired by the one or more image capturing devices 34 at predetermined image capture intervals that may be dependent, at least in part, upon the velocity of the platform 30 .
  • the safe flying height for a fixed wing aircraft may be a minimum clearance of 2,000′ above the ground surface 32 , and may have a general forward flying speed of 120 knots.
  • oblique image-capturing devices may capture 1 cm to 2 cm ground sample distance imagery
  • vertical image-capturing devices may be capable of capturing 2 cm to 4 cm ground sample distance imagery.
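The dependence of the capture interval on platform velocity can be illustrated with a back-of-the-envelope calculation. The 120-knot forward speed comes from the text; the along-track footprint and forward overlap are assumed values, used for illustration only:

```python
KNOT_MS = 0.514444            # metres per second per knot

speed_ms = 120 * KNOT_MS      # roughly 61.7 m/s ground speed
footprint_m = 300.0           # assumed along-track footprint of one frame
overlap = 0.60                # assumed forward overlap between frames

# Each new frame must cover the not-yet-overlapped part of the footprint:
advance_m = footprint_m * (1.0 - overlap)
interval_s = advance_m / speed_ms   # capture interval in seconds
```

At higher ground speeds the interval shrinks proportionally, which is why the system may issue the image capturing signal dependent, at least in part, on the velocity of the platform.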
  • the image data signals 52 corresponding to each acquired image, and the data points collected via the LIDAR scanner 36 , may be received by and/or stored within the one or more memories 92 of the image capturing and processing computer system 50 via the I/O port 82 .
  • the signals 60 , 62 , 64 , 66 , 68 , 70 , 72 and 74 corresponding to each captured image may be received and stored within the one or more memories 92 of the image capturing and processing computer system 50 via the I/O port 82 .
  • the LIDAR scanner signals 54 may be received and stored as LIDAR 3D point clouds.
  • the location of the one or more image capturing devices 34 relative to the ground surface 32 at the precise moment each image is captured is recorded within the one or more memories 92 and associated with the corresponding captured oblique and/or nadir image. Additionally, location data associated with one or more objects of interest may be catalogued and stored within one or more databases.
  • the processor 76 may create and/or store in the one or more memories 92 , one or more output image and data files.
  • the processor 76 may convert image data signals 52 , signals 60 , 62 , 64 , 66 , 68 , 70 , 72 and 74 , and LIDAR scanner signals 54 into computer-readable output image, data files, and LIDAR 3D point cloud files.
  • the output image, data files, and LIDAR 3D point cloud files may include a plurality of captured image files corresponding to captured oblique and/or nadir images, positional data, and/or LIDAR 3D point clouds corresponding thereto.
  • data associated with the one or more objects of interest and/or images may be catalogued and saved within one or more databases.
  • location information and/or metadata may be catalogued and saved within one or more databases.
  • Output image, data files, and LIDAR 3D point cloud files may then be further provided, displayed and/or used for obtaining measurements of and between objects depicted within the captured images, including measurements of variable distribution.
  • the image capturing and processing computer system 50 may be used to provide, display and/or obtain measurements of and between objects depicted within the captured images.
  • the image capturing and processing computer system 50 may deliver the output image, data files, and/or LIDAR 3D point clouds to one or more processors, such as, for example, the processors 84 illustrated in FIG. 4 , for the processors 84 to provide, display and/or obtain measurements.
  • delivery of the output image, data files, and/or LIDAR 3D point cloud files may also be by physical removal of the files from the image capturing and processing computer system 50 .
  • the output image, data files, and/or LIDAR 3D point cloud files may be stored on a removable storage device and transported to one or more processors 84 .
  • the image capturing and processing computer system 50 may provide at least a portion of the display and/or determine at least a portion of the measurements further described herein.
  • the following description for measurement of objects of interest as described herein includes reference to residential housing wherein the roof is the object of interest; however, it should be understood by one skilled in the art that the methods described herein may be applied to any structure and/or object of interest.
  • the methods may be applied to any man made and/or natural object (e.g., commercial building structure, tree, driveway, road, bridge, concrete, water, turf and/or the like).
  • FIGS. 5-9 illustrate exemplary images that are annotated to explain how embodiments of the present disclosure automatically locate objects of interest and automatically generate building outlines and reports about the objects of interest that can be used for a variety of purposes including automated building reports, change detection by comparing building outlines generated from imagery of the same building but captured at different times, and steering of mosaic cutlines through portions of the imagery that do not contain a building outline.
  • the region 104 includes one or more objects of interest with each building 106 being an object of interest. Roofs 108 of each building are features of the objects of interest.
  • the object(s) of interest and/or feature(s) of the object(s) of interest may be any natural and/or man-made objects within the image.
  • buildings 106 as the objects of interest, and roofs 108 , as the features of the objects of interest, will be used in the following description.
  • any object of interest within an image may be used including man-made or non-man made objects (e.g., natural objects).
  • objects of interest absent within an image may also be determined using systems and methods described herein.
  • alterations in a footprint of trees may be determined (i.e., loss of trees) using systems and methods described herein.
  • the output image file and data files may be used to geo-reference the collected images.
  • Exemplary methods for geo-referencing the imagery may be found in at least U.S. Pat. Nos. 7,424,133 and 5,247,356, which are hereby incorporated by reference in their entirety.
  • Geo-referencing each image results in information that can be used with predetermined algorithms, such as a single-ray projection algorithm, to determine three dimensional geographical coordinates for points in each image.
  • Using the internal and external orientation information also permits the real-world three-dimensional position of pixels to be determined using multiple images and stereophotogrammetry techniques.
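A minimal form of the single-ray projection step, under the simplifying assumption of flat terrain at a known elevation (a production system would intersect the ray with a terrain model instead):

```python
import numpy as np

def single_ray_ground_point(cam_xyz, ray_dir, ground_z=0.0):
    """Intersect a pixel's viewing ray, defined by the camera position
    and a direction derived from the internal/external orientation,
    with a horizontal ground plane at elevation ground_z."""
    cam = np.asarray(cam_xyz, dtype=float)
    d = np.asarray(ray_dir, dtype=float)
    if d[2] >= 0:
        raise ValueError("viewing ray must point downward")
    t = (ground_z - cam[2]) / d[2]   # parametric distance along the ray
    return cam + t * d

# Camera 1000 m up, ray sloping forward and slightly to the side:
pt = single_ray_ground_point([0.0, 0.0, 1000.0], [0.3, 0.1, -1.0])
```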
  • a point cloud may be generated as discussed above using geo-referenced images and/or LIDAR data points.
  • FIG. 6 illustrates a screen shot 110 of a point cloud 112 of the region 104 having the building 106 .
  • the point cloud 112 may be generated by extracting points with determined geographical coordinates using two or more images and geo-referenced data obtained from the image capturing and processing computer system 50 . Using a known or calculated distance between capture locations of the one or more image capturing devices 34 and stereo photogrammetry using multiple images, three dimensional points having three dimensional distances from the one or more image capturing devices 34 may be determined. In some embodiments, stereo analysis using standard stereo pair photogrammetry techniques may be automated. In each image, a geographical coordinate, such as (x,y,z) may correspond to a data point in the point cloud 112 . As such, each data point has a three-dimensional coordinate.
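The stereo-photogrammetry step can be sketched as ray triangulation: each image of a stereo pair contributes a viewing ray to the same ground feature, and the 3-D point is taken at the midpoint of the rays' closest approach. This is a generic illustration, not the patent's specific implementation:

```python
import numpy as np

def triangulate(c1, d1, c2, d2):
    """Midpoint-of-closest-approach of two viewing rays, one per image
    of a stereo pair; returns the estimated 3-D point (x, y, z)."""
    c1, d1 = np.asarray(c1, float), np.asarray(d1, float)
    c2, d2 = np.asarray(c2, float), np.asarray(d2, float)
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    w = c1 - c2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b            # zero only for parallel rays
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    return (c1 + t1 * d1 + c2 + t2 * d2) / 2.0

# Two capture positions 100 m apart, both imaging the point (50, 0, 0):
p = triangulate([0, 0, 1000], [50, 0, -1000],
                [100, 0, 1000], [-50, 0, -1000])
```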
  • the LIDAR 3D data point files may be processed and geo-referenced providing the point cloud 112 .
  • the LIDAR 3D data point files may be processed and geo-referenced using software such as Riegl's RiProcess application, distributed by Riegl located in Horn, Austria.
  • images and georeferenced data in addition to LIDAR data point files, may be used to generate the point cloud.
  • images and georeferenced data, in lieu of LIDAR data point files may be used to generate the point cloud 112 .
  • natural and man-made structures may be above ground (i.e., ground surface 32 illustrated in FIG. 2 ).
  • the point cloud 112 may be analyzed to identify data points within the point cloud 112 at particular elevations above ground and/or classify data points within the point cloud at particular elevations.
  • the point cloud 112 may be analyzed to identify and classify ground structures 114 and/or non-ground structures (i.e., those structures above the ground including man-made objects and natural objects).
  • areas within the point cloud 112 indicative of the ground structures 114 may be classified as the ground structure 114 and/or removed such that further analysis can be directed to natural and/or man-made structures depicted in the point cloud 112 above the ground structure 114 .
  • data points at particular elevations may be isolated, classified and/or removed from the point cloud 112 .
  • data points within the point cloud 112 not associated with structures at an elevation of interest 116 may be removed such that only data points of structures at the elevation of interest 116 remain within a modified point cloud 118 .
  • FIG. 7 illustrates a screen shot 120 of the modified point cloud 118 wherein data points at the elevation of interest 116 (i.e., height h) were identified from the point cloud 112 .
  • data points not associated with structures at the elevation of interest 116 shown in FIG. 6 were classified as not being associated with structures, and then optionally removed from the point cloud 112 to obtain the modified point cloud 118 shown in FIG. 7 .
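Removing data points below the elevation of interest reduces to a threshold on the z coordinate. The toy cloud and the 10 m threshold below are illustrative stand-ins for the height h in FIG. 7:

```python
import numpy as np

# Toy point cloud; columns are x, y, z (metres).
cloud = np.array([[0.0, 0.0, 0.2],    # ground return
                  [1.0, 0.0, 0.1],    # ground return
                  [2.0, 1.0, 11.5],   # rooftop return
                  [2.5, 1.2, 12.0]])  # rooftop return

elevation_of_interest = 10.0          # assumed height h above ground
modified_cloud = cloud[cloud[:, 2] >= elevation_of_interest]
```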
  • the elevation of interest 116 may be an average or estimated elevation.
  • FIG. 7 illustrates a screen shot 120 of a modified point cloud 118 showing data points at the elevation of interest 116 of an estimated height h of a roof from ground level shown in FIG. 6 .
  • the elevation of interest 116 may be determined by analysis of the elevation gradient of the data points.
  • an object of interest may be detected using the point cloud and the elevation gradient.
  • the object of interest may be the building 106 .
  • the remaining data points at the elevation of interest 116 may be used to classify structures in the modified point cloud 118 as man-made structures 122 or natural structures 124 with each data point having a unique three dimensional point (x,y,z) within the modified point cloud 118 .
  • spatial relationships between each data point may be analyzed to classify such structures as man-made structures 122 (i.e., caused by humankind) or natural structures 124 (i.e., not made or caused by humankind).
  • the variation of surface normal direction between each point of a group of points and the other points may be analyzed to differentiate man-made structures 122 from natural structures 124 .
  • This analysis can be implemented by analyzing a group of points within a local area surrounding particular points within the modified point cloud 118 to determine an orientation of a plane fitting the group of points and assigning this value(s) to the particular point. This process can be repeated for all of the points in the modified point cloud 118 to be classified as either a man-made structure or a natural structure, or for a subset of those points.
  • the local area can be a matrix of pixels that is a subset of the modified point cloud. The size of the matrix can be varied.
  • the matrix has a number of pixels that can be within a range of 0.1% to about 10% of the number of pixels within the modified point cloud 118 .
  • the local area can be a 25 ⁇ 25 pixel matrix or the like.
  • Data points having a surface normal variation that is below a pre-determined amount may be classified as part of a man-made structure 122 .
  • Each of the data points within the modified point cloud may be analyzed and data points positioned at or above the elevation of interest 116 may be classified as natural structures 124 or man-made structures 122 rather than the ground structure 114 .
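The plane-fitting and surface-normal-variation analysis described above can be sketched with principal component analysis: the normal of the best-fit plane through a local group of points is the covariance eigenvector with the smallest eigenvalue, and a low angular spread among neighboring normals suggests a planar, man-made surface. The 0.2-radian threshold is an assumed tuning value, not from the patent:

```python
import numpy as np

def local_normal(points):
    """Unit normal of the best-fit plane through a local group of
    points, via PCA: the eigenvector of the covariance matrix with
    the smallest eigenvalue."""
    pts = np.asarray(points, float)
    cov = np.cov((pts - pts.mean(axis=0)).T)
    w, v = np.linalg.eigh(cov)       # eigenvalues in ascending order
    n = v[:, 0]
    return n if n[2] >= 0 else -n    # orient upward for comparability

def normal_variation(neighborhoods):
    """Mean angular spread (radians) among the normals of several
    neighborhoods; low spread suggests a planar, man-made surface."""
    normals = np.array([local_normal(nb) for nb in neighborhoods])
    mean_n = normals.mean(axis=0)
    mean_n /= np.linalg.norm(mean_n)
    angles = np.arccos(np.clip(normals @ mean_n, -1.0, 1.0))
    return angles.mean()

# A flat roof patch: every neighborhood lies in the plane z = 5.
flat = [np.array([[0, 0, 5], [1, 0, 5], [0, 1, 5], [1, 1, 5.0]])
        for _ in range(3)]
VARIATION_THRESHOLD = 0.2            # assumed tuning value, radians
is_man_made = normal_variation(flat) < VARIATION_THRESHOLD
```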
  • each of the data points at the elevation of interest 116 and classified as a part of a man-made structure 122 may be further identified as part of an identifiable structure within an image.
  • groupings of data points within Region A classified as part of man-made structures 122 may be further classified as building 106 a ;
  • groupings of data points within Region B classified as part of man-made structures 122 may be further classified as building 106 b ;
  • groupings of data points within Region C classified as part of man-made structures 122 may be further classified as buildings 106 c and 106 e ;
  • groupings of data points within Region D classified as part of man-made structures 122 may be further classified as buildings 106 d and 106 f .
  • Groupings of data points can be classified as particular buildings by detecting building outlines. Once the building outlines are detected, data points within the building outline are classified as part of the building.
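Classifying groupings of data points as particular buildings can be illustrated with simple proximity-based clustering (single-linkage, breadth-first). The 1.5-unit radius is an illustrative assumption; the patent itself groups points by detected building outlines, as described above:

```python
from collections import deque

def group_points(points, radius=1.5):
    """Group 2-D points into clusters of mutually reachable neighbors,
    a stand-in for classifying data points into individual buildings."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue, cluster = deque([seed]), [seed]
        while queue:
            i = queue.popleft()
            near = [j for j in unvisited
                    if (points[i][0] - points[j][0]) ** 2
                     + (points[i][1] - points[j][1]) ** 2 <= radius ** 2]
            for j in near:
                unvisited.remove(j)
                queue.append(j)
                cluster.append(j)
        clusters.append(sorted(cluster))
    return clusters

# Two well-separated roofs yield two clusters:
pts = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10)]
clusters = group_points(pts)
```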
  • FIG. 9 illustrates a screen shot 136 of building 106 d .
  • edges of the building 106 d may be detected within the images using any standard edge detection algorithm.
  • Standard edge detection algorithms may include, but are not limited to, a Laplacian filter, and/or the like.
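A minimal Laplacian edge detector of the kind referenced above, using the standard 3x3 kernel; a real pipeline would use a library implementation, but this shows the operation:

```python
import numpy as np

def laplacian_edges(img, threshold=1.0):
    """Edge map from a 3x3 Laplacian kernel: pixels whose filter
    response exceeds the threshold are marked as edges."""
    k = np.array([[0, 1, 0],
                  [1, -4, 1],
                  [0, 1, 0]], dtype=float)
    h, w = img.shape
    out = np.zeros((h, w), dtype=bool)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            resp = np.sum(k * img[y - 1:y + 2, x - 1:x + 2])
            out[y, x] = abs(resp) > threshold
    return out

# A bright "roof" square on a dark background: edges appear only
# along the square's boundary, not in its flat interior.
img = np.zeros((8, 8))
img[2:6, 2:6] = 10.0
edges = laplacian_edges(img)
```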
  • spectral analysis of the raster content can be used to identify whether data points within the modified point cloud 118 are likely identifying an object of interest (e.g., buildings 106 ).
  • The automated roof detection system described in U.S. Pat. No. 9,070,018, which is hereby incorporated by reference in its entirety, may be used to identify the objects of interest within images.
  • Although the automated roof detection system is described in relation to detection of roofs, it should be noted that such a system may apply to identification of any object of interest (e.g., man-made or natural structures).
  • The automated object detection system may use statistical analysis to analyze sections of the images that depict the object of interest, as previously determined within the modified point cloud 118.
  • The automated object detection system may further refine the boundary of the buildings 106 and/or roof 130. Statistical measures may be computed for sections of each image.
  • FIGS. 10-12 illustrate an exemplary process for determining group descriptor patterns used to identify objects of interest within an image for automated object classification.
  • The rough outline of the object can be refined by correlating the locations of the data points in the point cloud 112 or the modified point cloud 118 with images from one or more of the databases 94, and then analyzing the raster content within one or more images showing the object to more precisely locate the outline of the object.
  • The value of each pixel within the image is part of a vector comprising the values determined for each of the measures calculated for that pixel within an n-dimensional feature space, as shown in FIG. 10.
  • Image measures may include, for example, localized measures and neighborhood measures. Localized measures describe data points at a particular processing point. Example localized measures may include, but are not limited to, surface fractal analysis, topological contour complexity variation, and/or the like. Neighborhood measures may include ranging-related neighborhood measures that describe the organization of structure surrounding a particular processing point. Exemplary neighborhood measures may include, but are not limited to, radial complexity, radial organization variation, and/or the like. It should be noted that the number of descriptor groups may not necessarily be the same as the number of measures.
  • Descriptor groups may form a new feature vector that includes any error and/or uncertainty (e.g., integrity of the point cloud) of the measurements folded into a space describing the average statistical value of a structure or feature of a structure (e.g., building 106 , roof).
  • Each descriptor group (i.e., cluster) may be compared to one or more other descriptor groups using a statistical model, such as an associative neural network, to create a descriptor pattern of inter-relational characteristics.
  • The group descriptor patterns may serve as the basis for identifying and/or classifying objects of interest and/or features of interest within the modified point cloud 118.
  • An average group descriptor pattern may be generated and cross-correlated using an associative neural network, creating a single pattern template (e.g., a homogenous pattern) that may be used to determine which regions of the image are likely building 106 and/or roof 130.
  • The trained neural networks may then use the pattern template to discriminate between the building 106 and/or roof 130 and other surrounding environment. For example, the trained neural networks may use the homogenous pattern template to discriminate between roof 130 and non-roof areas.
  • Second information, e.g., a boundary 138 (outline), may be determined for the building 106 (or roof 130) as shown in FIG. 9.
  • Locations within the raster content may be converted to vector form.
  • The trained neural networks may distinguish between roof 130 and non-roof areas, giving an estimated boundary for the roof 130.
  • Such a boundary may be defined in raster format.
  • Building corners 137 may be identified in the raster format and used to convert each line segment of the boundary 138 into vector form.
  • The boundary 138 may be stored in one of the databases 94 (shown in FIG. 4) and/or further analyzed.
  • The raster content may further be geo-referenced to real-world locations.
  • The building 106 may be identified by latitude/longitude.
  • The building 106 may be extracted from the image as shown in FIG. 13.
  • Such information may be stored as individual files for each building (e.g., .shp file) in the database 94 , or may be stored as multiple files within the database 94 .
  • Features of interest within an image may be identified on the objects of interest (e.g., buildings 106 a - 106 d) via the modified point cloud 118, the point cloud 112, or the original image(s).
  • Identification of features of interest may be through the automated object detection system as described herein and in U.S. Pat. No. 9,070,018.
  • Alternatively, features of interest may be identified via the original image subsequent to the automated classification of the object of interest via methods and systems as described in, for example, U.S. Pat. Nos. 8,977,520 and 7,424,133, which are both hereby incorporated by reference in their entirety.
  • Elements within the feature of interest may be further classified.
  • FIG. 14 illustrates a screen shot 132 of roofs 130 a - 130 d .
  • Line segments 134 forming the roof may be further classified using predefined roof elements including, but not limited to, a rake, a hip, a valley, an eave, a ridge, and/or the like.
  • One or more features of interest may be further analyzed and/or measurements determined.
  • Features of the roof 130, dimensions of the roof 130, and/or roof pitch may be determined using data points within the point cloud 112, the modified point cloud 118, and/or the one or more images.
  • Roof pitch may be calculated using data points of the point cloud 112, as each data point is associated with a geographical reference (x, y, z). Using data points of the point cloud 112 located at an elevation at the top of a peak and data points located at an elevation at the bottom of the peak, roof pitch may be calculated.
  • Roof feature calculations may include, but are not limited to, dimensions of eaves, edges, ridge, angles, and/or the like.
  • Characteristics of the roof 130, such as features of the roof 130, dimensions of the roof 130, roof pitch, and condition and/or composition of the roof 130, may be determined by analyzing the one or more images. For example, if the one or more images were obtained from a manned aerial vehicle and included multi-spectral or hyperspectral information, then the condition or composition of the roof 130 may also be determined.
  • The capture platform may also be a drone, and in this instance the resolution of the one or more images may be sufficient to determine the condition (e.g., moss, excessive moisture, hail damage, etc.) or composition (tile, composition, wood, slate, etc.) of the roof 130.
  • Boundaries may be used to identify changes to man-made and/or natural objects within an image.
  • Boundaries 138 (as shown in FIG. 9) may be used to identify changes to buildings 106 and/or roof 130 over time.
  • Boundaries and/or extraction techniques may be used in forming one or more mosaic models of the object of interest.
  • Boundaries 138 and/or extraction techniques as described herein may be used in forming a mosaic model of the building 106.
  • Images illustrating each side of the building 106 may be extracted using the methods as described herein. Such images may be composed into a three-dimensional mosaic model illustrating the building 106 .
  • Boundaries and/or features within the boundaries may be analyzed and/or described within a report.
  • A customer and/or contractor may receive a report regarding evaluation of object(s) of interest and/or feature(s) of interest.
  • The customer and/or contractor may receive a report regarding evaluation of the building 106 and/or roof 130.
  • FIG. 15 illustrates an exemplary embodiment of a roof report 140 .
  • The program logic 96 may provide for one or more of the processors 84 interfacing with the image capturing and processing computer system 50 over the network 86 to provide one or more roof reports 140.
  • The roof report 140 may include, but is not limited to, one or more data sets 142 regarding roof pitch, total area, eave length, hip ridge length, valley length, number of box vents, and/or the like. Additionally, the roof report 140 may include one or more images 144 of the building 106 and/or roof 130. Such images 144 may be automatically provided to the roof report 140 via extraction of the building 106 and/or roof 130 as described herein. Additionally, the roof report 140 may include a customer information data set 146 (e.g., customer name and contact information), estimated area detail, contractor data set 148 (e.g., contractor name and contact information), and/or the like.
  • Determination, analysis, and measurements of data associated with the object of interest and/or features of interest may be catalogued and stored in one or more databases for retrieval.
  • Data cataloged and stored associated with the object of interest and/or feature of interest may include, but is not limited to, location data (e.g., object of interest, each point of the object of interest), date and/or time of image creation, algorithms used, measurements, metadata, footprint, and/or the like.
  • Boundaries associated with the object of interest and/or feature of interest may be spatially catalogued using location (e.g., coordinate data, address).
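The region-based grouping of data points into particular buildings described above can be illustrated with a short sketch. The ray-casting point-in-polygon test, the function names, and the region identifiers here are assumptions for illustration; the patent does not prescribe a specific algorithm for testing membership in a building outline.

```python
# Sketch: classify point-cloud data points as part of a particular building
# by testing whether each point's (x, y) position falls inside a detected
# building outline. All names and the ray-casting test are illustrative.

def inside_outline(x, y, outline):
    """Ray-casting point-in-polygon test; outline is a list of (x, y) corners."""
    inside = False
    n = len(outline)
    for i in range(n):
        x1, y1 = outline[i]
        x2, y2 = outline[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal ray
            if x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                inside = not inside
    return inside

def classify_points(points, outlines):
    """Label each (x, y, z) point with the id of the outline containing it."""
    labels = []
    for x, y, z in points:
        label = None
        for building_id, outline in outlines.items():
            if inside_outline(x, y, outline):
                label = building_id
                break
        labels.append(label)
    return labels
```

For example, with a single square outline for a hypothetical building "106a", a point inside the square is labeled "106a" and a point outside is left unlabeled.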

Abstract

An automated method performed by at least one processor running computer executable instructions stored on at least one non-transitory computer readable medium, comprising: classifying first data points identifying at least one man-made roof structure within a point cloud and classifying second data points associated with at least one of natural structures and ground surface to form a modified point cloud; identifying at least one feature of the man-made roof structure in the modified point cloud; and generating a roof report including the at least one feature.

Description

INCORPORATION BY REFERENCE
The present patent application claims priority to the provisional patent application identified by U.S. Ser. No. 62/295,336, filed on Feb. 15, 2016, entitled “Automated System and Methodology for Feature Extraction”, and to the provisional patent application identified by U.S. Ser. No. 62/411,284, filed on Oct. 21, 2016, entitled “Automated System and Methodology for Feature Extraction,” the entire contents of all of which are hereby incorporated herein by reference.
BACKGROUND
Feature extraction within images holds a multitude of uses over multiple industries. The identification of elements or features within an image, or even absent from an image, provides valuable information. Prior art uses of human identification, however, waste time and energy, in addition to introducing variance between human extractors.
For example, residential and/or commercial property owners approaching a major roofing project may be unsure of the amount of material needed and/or the next step in completing the project. Generally, such owners contact one or more contractors for a site visit. Each contractor must physically be present at the site of the structure in order to make a determination on material needs and/or time. Providing such an estimate is laborious and may be affected by contractor timing, weather, contractor education, and the like. Estimates of square footage may vary even between contractors, causing variance in supply ordering as well. Additionally, measuring an actual roof may be costly and potentially hazardous, especially with steeply pitched roofs. Completion of a proposed roofing project may depend on the ease of obtaining a simplified roofing estimate and/or reputable contractors for the project.
Remote sensing technology has the ability to be more cost effective than manual inspection while providing pertinent information for assessment of roofing projects. Images are currently being used to measure objects and structures within the images, as well as to determine geographic locations of points within the image when preparing estimates for a variety of construction projects, such as roadwork, concrete work, and roofing. See, for example, U.S. Pat. No. 7,424,133, which describes techniques for measuring within oblique images. Also see, for example, U.S. Pat. No. 8,145,578, which describes techniques for allowing the remote measurement of the size, geometry, pitch, and orientation of the roof sections of a building, and then uses the information to provide an estimate to repair or replace the roof, or to install equipment thereon. Estimating construction projects using software increases the speed at which an estimate is prepared, and reduces labor and fuel costs associated with on-site visits.
Further, feature extraction, or cataloguing feature extraction, can go beyond features represented within an image and provide useful information on features missing from an image. For example, tree density within a forest, or changes to the tree density over time, may be determined using the lack of trees within a known area of an image. Thus, the location of a missing feature may be relevant in feature extraction of the image.
BRIEF DESCRIPTION OF SEVERAL VIEWS OF THE DRAWINGS
To assist those of ordinary skill in the relevant art in making and using the subject matter hereof, reference is made to the appended drawings, which are not intended to be drawn to scale, and in which like reference numerals are intended to refer to similar elements for consistency. For purposes of clarity, not every component may be labeled in every drawing.
FIG. 1 illustrates a block diagram for automatic feature extraction of one or more natural and/or man-made structures within an image, in accordance with the present disclosure.
FIG. 2 illustrates a schematic diagram of hardware forming an exemplary embodiment of a system for automatic feature extraction of one or more natural and/or man-made structures within an image. The system includes an image capturing system and a computer system.
FIG. 3 illustrates a diagrammatic view of an example of the image-capturing system of FIG. 2.
FIG. 4 illustrates a block diagram of the image-capturing computer system of FIG. 3 communicating via a network with multiple processors.
FIG. 5 illustrates a screen shot of an image of a region having multiple objects of interest in accordance with the present disclosure.
FIG. 6 illustrates a screen shot of a point cloud of the region illustrated in FIG. 5.
FIG. 7 illustrates a screen shot of a modified point cloud having data points at an elevation of interest.
FIG. 8 illustrates a screen shot of buildings identified using data points of a point cloud in accordance with the present disclosure.
FIG. 9 illustrates a screen shot of an image showing boundaries on a building identified using the modified point cloud of FIG. 7.
FIGS. 10-12 illustrate an exemplary method for automated object detection in accordance with the present disclosure.
FIG. 13 illustrates the building of FIG. 9 extracted from the image.
FIG. 14 illustrates a screen shot of roof features identified in accordance with the present disclosure.
FIG. 15 illustrates an exemplary roof report generated in accordance with the present disclosure.
DETAILED DESCRIPTION
Before explaining at least one embodiment of the disclosure in detail, it is to be understood that the disclosure is not limited in its application to the details of construction, experiments, exemplary data, and/or the arrangement of the components set forth in the following description or illustrated in the drawings unless otherwise noted.
The disclosure is capable of other embodiments or of being practiced or carried out in various ways. For example, although the roofing industry may be used as an example, feature extraction in one or more images in any industry is contemplated. Additionally, identification of features absent within one or more images is also contemplated. Also, it is to be understood that the phraseology and terminology employed herein is for purposes of description, and should not be regarded as limiting.
The following detailed description refers to the accompanying drawings. The same reference numbers in different drawings may identify the same or similar elements.
As used in the description herein, the terms “comprises,” “comprising,” “includes,” “including,” “has,” “having,” or any other variations thereof, are intended to cover a non-exclusive inclusion. For example, unless otherwise noted, a process, method, article, or apparatus that comprises a list of elements is not necessarily limited to only those elements, but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Further, unless expressly stated to the contrary, “or” refers to an inclusive and not to an exclusive “or”. For example, a condition A or B is satisfied by one of the following: A is true (or present) and B is false (or not present), A is false (or not present) and B is true (or present), and both A and B are true (or present).
In addition, use of the “a” or “an” are employed to describe elements and components of the embodiments herein. This is done merely for convenience and to give a general sense of the inventive concept. This description should be read to include one or more, and the singular also includes the plural unless it is obvious that it is meant otherwise. Further, use of the term “plurality” is meant to convey “more than one” unless expressly stated to the contrary.
As used herein, any reference to “one embodiment,” “an embodiment,” “some embodiments,” “one example,” “for example,” or “an example” means that a particular element, feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment. The appearance of the phrase “in some embodiments” or “one example” in various places in the specification is not necessarily all referring to the same embodiment, for example.
Circuitry, as used herein, may be analog and/or digital components, or one or more suitably programmed processors (e.g., microprocessors) and associated hardware and software, or hardwired logic. Also, “components” may perform one or more functions. The term “component,” may include hardware, such as a processor (e.g., microprocessor), an application specific integrated circuit (ASIC), field programmable gate array (FPGA), a combination of hardware and software, and/or the like. The term “processor” as used herein means a single processor or multiple processors working independently or together to collectively perform a task.
Software may include one or more computer readable instructions that when executed by one or more components cause the component to perform a specified function. It should be understood that the algorithms described herein may be stored on one or more non-transitory computer readable medium. Exemplary non-transitory computer readable mediums may include random access memory, read only memory, flash memory, and/or the like. Such non-transitory computer readable mediums may be electrically based, optically based, and/or the like.
It is to be further understood that, as used herein, the term user is not limited to a human being, and may comprise a computer, a server, a website, a processor, a network interface, a human, a user terminal, a virtual computer, combinations thereof, and the like, for example.
Referring now to the Figures, and in particular to FIG. 1, shown therein is a flow chart 10 of an exemplary method for automatically extracting features of one or more natural and/or man-made structures within an image. Using a combination of point cloud data and spectral analysis, features within one or more images may be extracted. Alternatively, using a combination of point cloud data and spectral analysis, missing features within an image may be determined. Further, in some embodiments, features within images and/or missing features within images may be catalogued (e.g., spatial cataloguing) within one or more databases for retrieval and/or analysis. For example, measurements of features (e.g., size of building(s), height of tree(s), footprint(s) of man-made or non-man made features) may be obtained. In some embodiments, such features may be stored in one or more database with measurements of features associated therewith, in addition to other metadata associated with the feature and/or image (e.g., date, algorithms used).
Generally, one or more images having raster data depicting an object of interest (e.g., a building) may be obtained and stored in a geospatial database to be available for use in creating and/or interpreting a point cloud that overlaps with the one or more images, as shown in step 12. The raster data depicting the object of interest may depict colors of visible light in mostly three bands (red, green, and blue) or one or more other modalities. For example, the raster data may include information for hyperspectral imaging which collects and processes information from across the electromagnetic spectrum. In some embodiments, the raster data may include near infrared data or thermal data. Each image may be geo-referenced such that geographical coordinates are provided for each point in the images. Geo-referencing may include associating each image, e.g., raster data, with camera pose parameters indicative of internal orientation information of the camera, and external orientation information of the camera. Internal orientation information includes, but is not limited to, known or determinable characteristics including focal length, sensor size and aspect ratio, radial and other distortion terms, principal point offset, pixel pitch, and alignment. External orientation information includes, but is not limited to, altitude, orientation in terms of roll, pitch and yaw, and the location of the camera relative to the Earth's surface. The internal and external orientation information can be obtained in a manner described, for example, in U.S. Pat. No. 7,424,133. Alternatively, or in addition, the internal and external orientation information can be obtained from analyzing overlapping images using any suitable technique, such as bundle-adjustment. Techniques for bundle adjustment are described, for example, in U.S. Pat. Nos. 6,996,254 and 8,497,905. 
The internal and external orientation information can be stored within metadata of the image, or can be stored separately from the image and associated with the image utilizing any suitable technique, such as a unique code for each image stored within a look-up field within the geospatial database.
In step 14, a point cloud may be generated or obtained on and/or about the object of interest. The point cloud may be generated using a 3D scanner, a detection system that works on the principle of radar, but uses light from a laser (these systems are known in the art and identified by the acronym “Lidar”), or from the images stored in the geospatial database using the internal and external orientation information for the images. Techniques for generating a point cloud using internal and external information for the images as well as feature matching techniques are known to those skilled in the art. For example, suitable computer programs for the photogrammetric creation of point clouds include Agisoft Photoscan by Agisoft; Metigo 3D by fokus GmbH Leipzig; 123D Catch by Autodesk; Pix4Dmapper by Pix4D; and DroneMapper by DroneMapper. Point clouds include a series of points in which each point may be identified with a three-dimensional (e.g., X, Y, and Z) position. The three-dimensional position of points representing man-made or natural objects can be classified as such using the relative position of the points relative to other points, as well as the shape of a grouping of the points. Further analysis can be conducted on these points as well as images correlated with the points to extract features of the objects represented in the point cloud. In some embodiments, the point cloud may be formed such that all features having known locations within three dimensions are represented. The point cloud(s) can be saved in any suitable format, such as point data, DSM/DTM, CAD, Tiff, Autodesk cloud, GeoTiff, and KMZ, for example.
Because the three-dimensional position on the Earth of each point in the point cloud is known, in a step 16, the Z values of the points can be analyzed to determine whether the points represent the ground, or a man-made or natural object located above the ground. In some embodiments, an elevation value or an elevation gradient may be used to analyze the point cloud data to determine whether particular points represent the ground, a man-made object, or a natural object above the ground. In some embodiments, classification of ground surface and non-ground surface may be determined for each data point. Identification of features on the ground surface versus features on a non-ground surface may aid in differentiation between features within the image. For example, a shadow created by a roof may have similar characteristics to the roof and be difficult for detection within an image using spectral analysis. Identification that the shadow is on the ground surface, however, would differentiate the shadow of the roof from the actual roof.
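The Z-value analysis above can be illustrated with a minimal sketch assuming a single elevation-of-interest threshold. This is one simple realization only; the threshold rule, function names, and labels are hypothetical, and an elevation gradient or local-minimum analysis could equally be used.

```python
import numpy as np

# Sketch: split point-cloud data points into ground and above-ground classes
# using a single elevation-of-interest threshold. Points below the threshold
# are classified as ground surface; the rest are left for further analysis
# as man-made or natural objects.

def classify_by_elevation(points, elevation_of_interest):
    """points: (N, 3) array of (x, y, z); returns an array of labels."""
    z = points[:, 2]
    return np.where(z < elevation_of_interest, "ground", "non-ground")

def remove_ground(points, labels):
    """Form a modified point cloud by dropping ground-classified points."""
    return points[labels != "ground"]
```

Removing the ground-classified points yields the modified point cloud used for subsequent object classification.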
Data points can be transformed with information classifying the type of object that the data point represents. For example, data points indicative of the ground surface may be classified within the point cloud as being part of the ground surface. In some embodiments, data points of certain type(s) of objects can be removed from the point cloud to enhance processing of the remaining type(s) of objects within a modified point cloud. For example, all of the data points classified as being part of the ground surface may be removed from the point cloud to enhance the analysis of the points representing man-made objects or natural objects.
Certain objects of interest (e.g., man-made objects or natural objects) may be identified, and in a step 18, the object of interest may be detected using the point cloud and elevation gradient. In some embodiments, man-made structures, natural structures, and ground structures may also be determined. Natural structures and ground structures may be classified, and/or removed from the point cloud forming a modified point cloud. For example, assuming that buildings are the object of interest, points in the point cloud representing the building will have a higher elevation (e.g., Z value) than points in the point cloud representing the ground surface. In this case, the points having a lower elevation Z-value than adjacent points can be classified as the ground surface, and the other points in the point cloud can be initially classified as either a man-made structure, or a natural structure. The classification can be accomplished by storing additional data within the point cloud. Even further, an elevation of interest may be determined and all data points below the elevation of interest may be classified as the ground structure, or all data points above the elevation of interest may be classified as a natural object or a man-made object.
It should be noted that a point cloud is not needed for identification of features within an image or missing features within an image as described herein. The point cloud may aid in identification of ground surface versus non-ground surface, but is not a mandatory step in determination of features within an image or features missing within an image.
In a step 20, the points within the point cloud that are initially classified as not being part of the ground structure, are further analyzed to determine whether the points represent a man-made object (e.g., a building), or a natural object, (e.g., a tree). This can be accomplished by analyzing the shape of a grouping of the points. Groupings of points having planar surfaces (e.g., roof section(s)) detectable within the point cloud can be classified as a man-made object, and groupings of points devoid of planar surfaces (e.g., tree(s)) detectable within the point cloud can be classified as a natural object. This can be accomplished by analyzing a variation of surface normal direction between each point of a group of points and the other points within the group.
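The planar-surface test of step 20 can be sketched as follows. A plane-fit residual is used here as a proxy for the variation of surface normal direction described above; the tolerance value and function names are assumptions.

```python
import numpy as np

# Sketch: decide whether a grouping of points is man-made (contains a planar
# surface, e.g. a roof section) or natural (devoid of planar surfaces, e.g.
# a tree) by fitting a plane with SVD and checking how tightly the points
# hug it. The tolerance is an illustrative assumption.

def is_planar(points, tolerance=0.1):
    """points: (N, 3) array; True if the points lie close to a common plane."""
    centered = points - points.mean(axis=0)
    # The smallest singular value measures spread along the plane normal.
    residual = np.linalg.svd(centered, compute_uv=False)[-1]
    return residual / np.sqrt(len(points)) < tolerance

def classify_grouping(points):
    return "man-made" if is_planar(points) else "natural"
```

A flat roof section fits its plane with near-zero residual, while a tree-like scatter of points does not.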
Once this analysis has been accomplished, the points are classified as either representing a man-made object or a natural object. Then, points within the point cloud that represent the man-made object, for example, can be further analyzed to determine one or more features (e.g., roof) of the object of interest (e.g., building). In some embodiments, the features may be classified using the modified point cloud in which the points have been classified, and/or certain points have been removed from the point cloud. In a step 22, one or more first information, (e.g., initial boundary) of the object of interest may be determined by looking for an outer boundary of a group of points, as well as analyzing the Z value of the points and looking for differences above a threshold between the Z values of adjacent points. In some embodiments, the boundaries of the object of interest may be further determined and/or refined using a first location (e.g., latitude and longitude) of the points within the point cloud that are determined to be part of the object of interest and querying the geospatial database to obtain images having raster data depicting the object of interest. Then, standard edge detection methods and/or spectral analysis can be used to precisely determine second information, e.g., the boundary of the object of interest having second location coordinates. The second location coordinates can be X,Y pixel coordinates, or latitude/longitude and elevation. In some embodiments, the first information (e.g., initial boundary) may be initially determined using the point cloud or modified point cloud, and then refined to generate second information by registering the point cloud data or modified point cloud data with the raster data within one or more images, and then analyzing the raster data with one or more suitable image processing technique. 
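One way to obtain the first information (an initial, rough boundary) from a grouping of points is the convex hull of the grouping's (x, y) positions. The sketch below uses Andrew's monotone chain; the patent does not prescribe a particular outer-boundary algorithm, and the refined second boundary would still come from the image analysis described above.

```python
# Sketch: derive an initial outer boundary for a grouping of points as the
# convex hull of their (x, y) positions (Andrew's monotone chain). The hull
# is only a first approximation, later refined against image raster data.

def cross(o, a, b):
    """Cross product of vectors o->a and o->b; >0 for a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """points: list of (x, y); returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]
```

Interior points of the grouping are discarded, leaving only the outline vertices.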
Examples of suitable image processing techniques include, but are not limited to standard edge detection methods or spectral analysis methods.
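A minimal sketch of one such standard edge detection method, a 3x3 Laplacian filter, follows. The threshold value is an assumption, and a plain loop is used so that no image-processing library is presumed; a real pipeline would use an optimized convolution.

```python
import numpy as np

# Sketch: standard edge detection with a 3x3 Laplacian kernel. Pixels where
# the magnitude of the Laplacian response exceeds a threshold are marked as
# edges; border pixels are left unmarked for simplicity.

LAPLACIAN = np.array([[0, 1, 0],
                      [1, -4, 1],
                      [0, 1, 0]])

def laplacian_edges(image, threshold=1.0):
    """image: 2-D array of intensities; returns a boolean edge mask."""
    h, w = image.shape
    response = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = image[i - 1:i + 2, j - 1:j + 2]
            response[i, j] = np.sum(patch * LAPLACIAN)
    return np.abs(response) > threshold
```

On a synthetic image with a vertical intensity step, only the pixels adjacent to the step are marked as edges.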
Spectral analysis may be used to group data points of an object of interest. For example, data points of a feature within an image may have similar spectral signatures such that reflected and/or absorbed electromagnetic radiation may be similar and able to be differentiated from data points of other features within the image. For example, data points of a building may have spectral signatures different from data points of grass surrounding the building. Data points within the image having similar spectral signatures may thus be grouped to identify one or more features within the image. Exemplary spectral analysis methods are described in U.S. Pat. No. 9,070,018, the entire content of which is incorporated herein by reference.
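The spectral grouping described above might be sketched as below, with a hypothetical reference signature and distance threshold standing in for the spectral-signature comparison; the exemplary patents describe more elaborate methods.

```python
import numpy as np

# Sketch: group pixels whose spectral signatures are similar. A pixel joins
# the group when its spectrum lies within a Euclidean distance of a reference
# signature (e.g., a sampled roof color). Reference and threshold values are
# illustrative assumptions.

def match_signature(image, reference, max_distance=30.0):
    """image: (H, W, bands) array; returns a boolean mask of matching pixels."""
    diff = image.astype(float) - np.asarray(reference, dtype=float)
    distance = np.linalg.norm(diff, axis=2)
    return distance < max_distance

# e.g. roof_mask = match_signature(rgb_image, reference=(180, 60, 50))
```

The same comparison applies unchanged to hyperspectral or thermal bands, since only the number of bands in the signature differs.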
In some embodiments, thermal analysis of the object of interest may be used to group data points of objects of interest having similar or different thermal signatures. For example, a thermographic camera may be used to obtain an image using infrared radiation. Data points may be grouped based on temperature measurements of each feature within the image. Thermal analysis may be in addition to, or in lieu of a typical image (e.g., RGB image).
In a step 24, the object of interest may be extracted, classified, and/or isolated within the image. Additionally, the object of interest may be cataloged within one or more databases. Cataloging may be via address, size of feature(s) and/or object of interest, color(s) of features and/or object of interest, feature type, spatial relations (e.g., address, coordinates), and/or the like. For example, in some embodiments, the object of interest may be spatially cataloged within one or more databases. To that end, one or more features of the object of interest may be isolated and/or extracted. One or more outlines of the one or more features may also be determined (e.g., a polygon outline of one facet of a roof). Each line of the outline may be spatially stored within a database such that retrieval may be via coordinates, address, and/or the like.
In a step 26, further analysis of the object of interest and/or the image may be performed. For example, when the present disclosure is used for analyzing a roof of a building and the data points include the three-dimensional position of the part of the object represented by the data points, data points representing the outer boundaries can be used to calculate the perimeter of the roof; data points representing a ridge and a valley bordering a roof section can be used to calculate a pitch of the roof section. These techniques can be used to calculate a variety of roof features, roof dimensions, and/or roof pitch of sections of the roof. The roof outline can be saved as a data file using any suitable format, such as a vector format, or a raster format. The calculated data and the data file of the roof outline can be saved in the geospatial database or a separate database that may or may not be correlated with the geospatial database. Further, the latitude and longitude of the data points can be used to determine a physical address of a particular object of interest (e.g., building) and such address can be stored with the calculated data. The calculated data, the image data and the address can be correlated together, and automatically used to populate a template thereby preparing a predetermined report about the object of interest including one or more images of the object of interest and calculated data about the object of interest. This methodology can be automatically executed by one or more processors as discussed herein to identify and obtain information about objects of interest for a variety of purposes. For example, the methodology can be automatically executed by one or more processors to generate reports for a plurality of objects of interest, without manual or human intervention. 
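The perimeter and pitch calculations described above can be sketched as follows; the outline coordinates, ridge point, and valley point are illustrative values, not data from the disclosure:

```python
import math

def roof_perimeter(boundary):
    """Perimeter of a closed outline given ordered (x, y, z) corner points."""
    return sum(math.dist(boundary[i], boundary[(i + 1) % len(boundary)])
               for i in range(len(boundary)))

def roof_pitch(ridge_point, valley_point):
    """Pitch of a roof section as rise over horizontal run between a
    ridge point and a valley (eave) point bordering the section."""
    rise = ridge_point[2] - valley_point[2]
    run = math.dist(ridge_point[:2], valley_point[:2])
    return rise / run

# Rectangular 10 m x 8 m outline at eave height 5 m.
outline = [(0, 0, 5), (10, 0, 5), (10, 8, 5), (0, 8, 5)]
perimeter = roof_perimeter(outline)        # 10 + 8 + 10 + 8 = 36.0 m
pitch = roof_pitch((5, 4, 8), (5, 0, 5))   # rise 3 m over run 4 m -> 0.75
```

The same distance arithmetic generalizes to other roof dimensions once the relevant boundary, ridge, and valley data points have been identified.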
Thus, the presently described methodology provides a performance increase over conventional methods for generating object reports, as well as an enhancement to the operation of the processor when generating reports for one or more objects of interest.
Examples of the hardware/software for obtaining the images and performing the steps described above will now be described.
Referring to FIGS. 1 and 2, in some embodiments, an image capturing system 28 may be used to obtain the one or more images. The image capturing system 28 may be carried by a platform 30. The platform 30 may be an airplane, unmanned aerial system, space shuttle, rocket, satellite, and/or any other suitable vehicle capable of carrying the image capturing system 28. For example, in some embodiments, the platform 30 may be a fixed wing aircraft.
The platform 30 may carry the image capturing system 28 at one or more altitudes above a ground surface 32. For example, the platform 30 may carry the image capturing system 28 over a predefined area and at one or more predefined altitudes above the Earth's surface and/or any other surface of interest. In FIG. 2, the platform 30 is illustrated carrying the image capturing system 28 at a plane P above the ground surface 32.
The platform 30 may be capable of controlled movement and/or flight. As such, the platform 30 may be manned or unmanned. In some embodiments, the platform 30 may be capable of controlled movement and/or flight along a pre-defined flight path and/or course. For example, the platform 30 may be capable of controlled movement and/or flight through the Earth's atmosphere and/or outer space.
The platform 30 may include one or more systems for generating and/or regulating power. For example, the platform 30 may include one or more generators, fuel cells, solar panels, and/or batteries for powering the image capturing system 28.
Referring to FIGS. 2 and 3, the image capturing system 28 may include one or more image capturing devices 34 configured to obtain an image from which a point cloud may be generated. In some embodiments, the image capturing system 28 may optionally include one or more LIDAR scanners 36 to generate data that can be used to create a point cloud. Additionally, in some embodiments, the image capturing system 28 may include one or more global positioning system (GPS) receivers 38, one or more inertial navigation units (INU) 40, one or more clocks 42, one or more gyroscopes 44, one or more compasses 46, and one or more altimeters 48. In some embodiments, the image capturing system 28 may include one or more thermographic cameras configured to capture one or more thermographic images. One or more of these elements of the image capturing system 28 may be interconnected with an image capturing and processing computer system 50. In some embodiments, the internal and external orientation information for the images can be determined during post processing using an image processing technique, such as bundle adjustment. In these embodiments, the image capturing system 28 may not include the one or more INU 40.
In some embodiments, the one or more image capturing devices 34 may be capable of capturing images photographically and/or electronically. The one or more image capturing devices 34 may be capable and/or configured to provide oblique and/or vertical images, and may include, but are not limited to, conventional cameras, digital cameras, digital sensors, charge-coupled devices, thermographic cameras and/or the like. In some embodiments, the one or more image capturing devices 34 may be one or more ultra-high resolution cameras. For example, in some embodiments, the one or more image capturing devices 34 may be ultra-high resolution capture systems, such as may be found in the Pictometry PentaView Capture System, manufactured and used by Pictometry International based in Henrietta, N.Y.
The one or more image capturing devices 34 may include known or determinable characteristics including, but not limited to, focal length, sensor size, aspect ratio, radial and other distortion terms, principal point offset, pixel pitch, alignment, and/or the like.
The one or more image capturing devices 34 may acquire one or more images and issue one or more image data signals 52 corresponding to one or more particular images taken. Such images may be stored in the image capturing and processing computer system 50.
The LIDAR scanner 36 may determine a distance between the platform 30 and objects on or about the ground surface 32 by illuminating a target with laser light and analyzing the reflected light to provide data points. In some embodiments, software associated with the LIDAR scanner 36 may generate a depth map or point cloud based on the measured distance between the platform 30 and objects on and/or about the ground surface 32. To that end, the LIDAR scanner 36 may issue one or more data signals 54 of such data points to the image capturing and processing computer system 50, providing a point cloud wherein each data point may represent a particular coordinate. An exemplary LIDAR scanner 36 is the Riegl LMS-Q680i, manufactured and distributed by Riegl Laser Measurement Systems located in Horn, Austria. It should be noted that the distance between the platform 30 and objects on or about the ground surface 32 may also be determined via other methods including, but not limited to, stereographic methods.
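The underlying range measurement is a time-of-flight calculation: the pulse travels to the target and back, so range is half the round-trip distance. A minimal sketch, with an illustrative round-trip time (not a value from the disclosure):

```python
# Speed of light in a vacuum, m/s.
C = 299_792_458.0

def lidar_range(round_trip_seconds):
    """Range to target from a pulse's round-trip travel time."""
    return C * round_trip_seconds / 2.0

# A return received ~13.34 microseconds after emission corresponds to
# a target roughly 2 km from the scanner.
r = lidar_range(13.34e-6)
```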
In some embodiments, the LIDAR scanner 36 may be a downward projecting high pulse rate LIDAR scanning system. It should be noted that other three-dimensional optical distancing systems or intensity-based scanning techniques may be used. In some embodiments, the LIDAR scanner 36 may be optional as a point cloud may be generated using one or more images and photogrammetric image processing techniques, e.g., the images may be geo-referenced using position and orientation of the image capturing devices 34 and matched together to form the point cloud.
The GPS receiver 38 may receive global positioning system (GPS) signals 56 that may be transmitted by one or more global positioning system satellites 58. The GPS signals 56 may enable the location of the platform 30 relative to the ground surface 32 and/or an object of interest to be determined. The GPS receiver 38 may decode the GPS signals 56 and/or issue location signals 60. The location signals 60 may be dependent, at least in part, on the GPS signals 56 and may be indicative of the location of the platform 30 relative to the ground surface 32 and/or an object of interest. The location signals 60 corresponding to each image captured by the image capturing devices 34 may be received and/or stored by the image capturing and processing computer system 50 in a manner in which the location signals are associated with the corresponding image.
The INU 40 may be a conventional inertial navigation unit. The INU 40 may be coupled to and detect changes in the velocity (e.g., translational velocity, rotational velocity) of the one or more image capturing devices 34, the LIDAR scanner 36, and/or the platform 30. The INU 40 may issue velocity signals and/or data signals 62 indicative of such velocities and/or changes therein to the image capturing and processing computer system 50. The image capturing and processing computer system 50 may then store the velocity signals and/or data 62 corresponding to each image captured by the one or more image capturing devices 34 and/or data points collected by the LIDAR scanner 36.
The clock 42 may keep a precise time measurement. For example, the clock 42 may keep a precise time measurement used to synchronize events. The clock 42 may include a time data/clock signal 64. In some embodiments, the time data/clock signal 64 may include a precise time that an image is taken by the one or more image capturing devices 34 and/or the precise time that points are collected by the LIDAR scanner 36. The time data 64 may be received by and/or stored by the image capturing and processing computer system 50. In some embodiments, the clock 42 may be integral with the image capturing and processing computer system 50, such as, for example, a clock software program.
The gyroscope 44 may be a conventional gyroscope commonly found on airplanes and/or within navigation systems (e.g., commercial navigation systems for airplanes). The gyroscope 44 may submit signals including a yaw signal 66, a roll signal 68, and/or a pitch signal 70. In some embodiments, the yaw signal 66, the roll signal 68, and/or the pitch signal 70 may be indicative of the yaw, roll, and pitch of the platform 30. The yaw signal 66, the roll signal 68, and/or the pitch signal 70 may be received and/or stored by the image capturing and processing computer system 50.
The compass 46 may be any conventional compass (e.g., conventional electronic compass) capable of indicating the heading of the platform 30. The compass 46 may issue a heading signal and/or data 72. The heading signal and/or data 72 may be indicative of the heading of the platform 30. The image capturing and processing computer system 50 may receive, store and/or provide the heading signal and/or data 72 corresponding to each image captured by the one or more image capturing devices 34.
The altimeter 48 may indicate the altitude of the platform 30. The altimeter 48 may issue an altimeter signal and/or data 74. The image capturing and processing computer system 50 may receive, store and/or provide the altimeter signal and/or data 74 corresponding to each image captured by the one or more image capturing devices 34.
Referring to FIGS. 3 and 4, the image capturing and processing computer system 50 may be a system or systems that are able to embody and/or execute the logic of the processes described herein. Logic embodied in the form of software instructions and/or firmware may be executed on any appropriate hardware. For example, logic embodied in the form of software instructions or firmware may be executed on a dedicated system or systems, or on a personal computer system, or on a distributed processing computer system, and/or the like. In some embodiments, logic may be implemented in a stand-alone environment operating on a single computer system and/or logic may be implemented in a networked environment, such as a distributed system using multiple computers and/or processors. For example, a subsystem of the image capturing and processing computer system 50 can be located on the platform 30, and another subsystem of the image capturing and processing computer system 50 can be located in a data center having multiple computers and/or processors networked together.
In some embodiments, the image capturing and processing computer system 50 may include one or more processors 76 communicating with one or more image capturing input devices 78, image capturing output devices 80, and/or I/O ports 82 enabling the input and/or output of data to and from the image capturing and processing computer system 50.
FIG. 4 illustrates the image capturing and processing computer system 50 having a single processor 76. It should be noted, however, that the image capturing and processing computer system 50 may include multiple processors 76. In some embodiments, the processor 76 may be partially or completely network-based or cloud-based. The processor 76 may or may not be located in a single physical location. Additionally, multiple processors 76 may or may not necessarily be located in a single physical location.
The one or more image capturing input devices 78 may be capable of receiving information input from a user and/or processor(s), and transmitting such information to the processor 76. The one or more image capturing input devices 78 may include, but are not limited to, implementation as a keyboard, touchscreen, mouse, trackball, microphone, fingerprint reader, infrared port, slide-out keyboard, flip-out keyboard, cell phone, PDA, video game controller, remote control, fax machine, network interface, speech recognition, gesture recognition, eye tracking, brain-computer interface, combinations thereof, and/or the like.
The one or more image capturing output devices 80 may be capable of outputting information in a form perceivable by a user and/or processor(s). For example, the one or more image capturing output devices 80 may include, but are not limited to, implementations as a computer monitor, a screen, a touchscreen, a speaker, a website, a television set, a smart phone, a PDA, a cell phone, a fax machine, a printer, a laptop computer, an optical head-mounted display (OHMD), combinations thereof, and/or the like. It is to be understood that in some exemplary embodiments, the one or more image capturing input devices 78 and the one or more image capturing output devices 80 may be implemented as a single device, such as, for example, a touchscreen or a tablet.
Each of the data signals 52, 54, 60, 62, 64, 66, 68, 70, 72, and/or 74 may be provided to the image capturing and processing computer system 50. For example, each of the data signals 52, 54, 60, 62, 64, 66, 68, 70, 72, and/or 74 may be received by the image capturing and processing computer system 50 via the I/O port 82. The I/O port 82 may comprise one or more physical and/or virtual ports.
In some embodiments, the image capturing and processing computer system 50 may be in communication with one or more additional processors 84 as illustrated in FIG. 4. In this example, the image capturing and processing computer system 50 may communicate with the one or more additional processors 84 via a network 86. As used herein, the terms “network-based”, “cloud-based”, and any variations thereof, may include the provision of configurable computational resources on demand via interfacing with a computer and/or computer network, with software and/or data at least partially located on the computer and/or computer network, by pooling processing power of two or more networked processors.
In some embodiments, the network 86 may be the Internet and/or other network. For example, if the network 86 is the Internet, a primary user interface of the image capturing software and/or image manipulation software may be delivered through a series of web pages. It should be noted that the primary user interface of the image capturing software and/or image manipulation software may be replaced by another type of interface, such as, for example, a Windows-based application.
The network 86 may be almost any type of network. For example, the network 86 may interface by optical and/or electronic interfaces, and/or may use a plurality of network topologies and/or protocols including, but not limited to, Ethernet, TCP/IP, circuit switched paths, and/or combinations thereof. For example, in some embodiments, the network 86 may be implemented as the World Wide Web (or Internet), a local area network (LAN), a wide area network (WAN), a metropolitan network, a wireless network, a cellular network, a Global System for Mobile Communications (GSM) network, a code division multiple access (CDMA) network, a 3G network, a 4G network, a satellite network, a radio network, an optical network, a cable network, a public switched telephone network, an Ethernet network, combinations thereof, and/or the like. Additionally, the network 86 may use a variety of network protocols to permit bi-directional interface and/or communication of data and/or information. It is conceivable that in the near future, embodiments of the present disclosure may use more advanced networking topologies.
The image capturing and processing computer system 50 may be capable of interfacing and/or communicating with the one or more computer systems including processors 84 via the network 86. Additionally, the one or more processors 84 may be capable of communicating with each other via the network 86.
The processors 84 may include, but are not limited to implementation as a variety of different types of computer systems, such as a server system having multiple servers in a configuration suitable to provide a commercial computer based business system (such as a commercial web-site and/or data center), a personal computer, a smart phone, a network-capable television set, a television set-top box, a tablet, an e-book reader, a laptop computer, a desktop computer, a network-capable handheld device, a video game console, a server, a digital video recorder, a DVD player, a Blu-Ray player, a wearable computer, a ubiquitous computer, combinations thereof, and/or the like. In some embodiments, the computer systems comprising the processors 84 may include one or more input devices 88, one or more output devices 90, processor executable code, and/or a web browser capable of accessing a website and/or communicating information and/or data over a network, such as network 86. The computer systems comprising the one or more processors 84 may include one or more non-transient memory comprising processor executable code and/or software applications, for example. The image capturing and processing computer system 50 may be modified to communicate with any of these processors 84 and/or future developed devices capable of communicating with the image capturing and processing computer system 50 via the network 86.
The one or more input devices 88 may be capable of receiving information input from a user, processors, and/or environment, and transmit such information to the processor 84 and/or the network 86. The one or more input devices 88 may include, but are not limited to, implementation as a keyboard, touchscreen, mouse, trackball, microphone, fingerprint reader, infrared port, slide-out keyboard, flip-out keyboard, cell phone, PDA, video game controller, remote control, fax machine, network interface, speech recognition, gesture recognition, eye tracking, brain-computer interface, combinations thereof, and/or the like.
The one or more output devices 90 may be capable of outputting information in a form perceivable by a user and/or processor(s). For example, the one or more output devices 90 may include, but are not limited to, implementations as a computer monitor, a screen, a touchscreen, a speaker, a website, a television set, a smart phone, a PDA, a cell phone, a fax machine, a printer, a laptop computer, an optical head-mounted display (OHMD), combinations thereof, and/or the like. It is to be understood that in some exemplary embodiments, the one or more input devices 88 and the one or more output devices 90 may be implemented as a single device, such as, for example, a touchscreen or a tablet.
Referring to FIG. 4, in some embodiments, the image capturing and processing computer system 50 may include one or more processors 76 working together, or independently to execute processor executable code, and one or more memories 92 capable of storing processor executable code. In some embodiments, each element of the image capturing and processing computer system 50 may be partially or completely network-based or cloud-based, and may or may not be located in a single physical location.
The one or more processors 76 may be implemented as a single or plurality of processors working together, or independently, to execute the logic as described herein. Exemplary embodiments of the one or more processors 76 may include, but are not limited to, a digital signal processor (DSP), a central processing unit (CPU), a field programmable gate array (FPGA), a microprocessor, a multi-core processor, and/or combinations thereof, for example. The one or more processors 76 may be capable of communicating via the network 86, illustrated in FIG. 4, by exchanging signals (e.g., analog, digital, optical, and/or the like) via one or more ports (e.g., physical or virtual ports) using a network protocol. It is to be understood that, in certain embodiments using more than one processor 76, the processors 76 may be located remotely from one another, located in the same location, or may comprise a unitary multi-core processor. The one or more processors 76 may be capable of reading and/or executing processor executable code and/or capable of creating, manipulating, retrieving, altering, and/or storing data structures into one or more memories 92.
The one or more memories 92 may be capable of storing processor executable code. Additionally, the one or more memories 92 may be implemented as a conventional non-transitory memory, such as, for example, random access memory (RAM), a CD-ROM, a hard drive, a solid state drive, a flash drive, a memory card, a DVD-ROM, a floppy disk, an optical drive, combinations thereof, and/or the like, for example.
In some embodiments, the one or more memories 92 may be located in the same physical location as the image capturing and processing computer system 50. Alternatively, one or more memories 92 may be located in a different physical location from the image capturing and processing computer system 50, with the image capturing and processing computer system 50 communicating with the one or more memories 92 via a network such as the network 86, for example. Additionally, one or more of the memories 92 may be implemented as a "cloud memory" (i.e., one or more memories 92 may be partially or completely based on or accessed using a network, such as network 86, for example).
Referring to FIG. 4, the one or more memories 92 may store processor executable code and/or information comprising one or more databases 94 and program logic 96 (i.e., computer executable logic). In some embodiments, the processor executable code may be stored as a data structure, such as a database and/or data table, for example. For example, one of the databases 94 can be a geospatial database storing aerial images, another one of the databases 94 can store point clouds, and another one of the databases 94 can store the internal and external orientation information for geo-referencing the images within the geospatial database.
In use, the image capturing and processing computer system 50 may execute the program logic 96 which may control the reading, manipulation, and/or storing of data signals 52, 54, 60, 62, 64, 66, 68, 70, 72, and/or 74. For example, the program logic may read data signals 52 and 54, and may store them within the one or more memories 92. Each of the signals 60, 62, 64, 66, 68, 70, 72 and 74, may represent the conditions existing at the instant that an oblique image and/or nadir image is acquired and/or captured by the one or more image capturing devices 34.
In some embodiments, the image capturing and processing computer system 50 may issue an image capturing signal to the one or more image capturing devices 34 to thereby cause those devices to acquire and/or capture an oblique image and/or a nadir image at a predetermined location and/or at a predetermined interval. In some embodiments, the image capturing and processing computer system 50 may issue the image capturing signal dependent, at least in part, on the velocity of the platform 30. Additionally, the image capturing and processing computer system 50 may issue a point collection signal to the LIDAR scanner 36 to thereby cause the LIDAR scanner to collect points at a predetermined location and/or at a predetermined interval.
Program logic 96 of the image capturing and processing computer system 50 may decode, as necessary, and/or store the aforementioned signals within the memory 92, and/or associate the data signals with the image data signals 52 corresponding thereto, or the LIDAR scanner signal 54 corresponding thereto. Thus, for example, the altitude, orientation, roll, pitch, yaw, and the location of each image capturing device 34 relative to the ground surface 32 and/or object of interest for images captured may be known. More particularly, the [X,Y,Z] location (e.g., latitude, longitude, and altitude) of an object or location seen within each image may be determined. Similarly, the altitude, orientation, roll, pitch, yaw, and the location of the LIDAR scanner 36 relative to the ground surface 32 and/or object of interest for collection of data points may be known. More particularly, the [X,Y,Z] location (e.g., latitude, longitude, and altitude) of a targeted object or location may be determined. In some embodiments, location data for the targeted object or location may be catalogued within one or more databases for retrieval.
The platform 30 may be piloted and/or guided through an image capturing path that may pass over a particular area of the ground surface 32. The number of times the platform 30 and/or the one or more image capturing devices 34 and LIDAR scanner 36 pass over the area of interest may be dependent at least in part upon the size of the area and the amount of detail desired in the captured images.
As the platform 30 passes over an area of interest, a number of images (e.g., oblique images, nadir images) may be captured by the one or more image capturing devices 34 and, optionally, data points may be captured by the LIDAR scanner 36. In some embodiments, the images may be captured and/or acquired by the one or more image capturing devices 34 at predetermined image capture intervals that may be dependent, at least in part, upon the velocity of the platform 30. For example, the safe flying height for a fixed wing aircraft may be a minimum clearance of 2,000′ above the ground surface 32, with a general forward flying speed of 120 knots. In this example, oblique image-capturing devices may capture 1 cm to 2 cm ground sample distance imagery, and vertical image-capturing devices may be capable of capturing 2 cm to 4 cm ground sample distance imagery.
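The relationship between altitude, speed, ground sample distance, and capture interval can be sketched as follows; the sensor parameters, overlap fraction, and unit conversions are assumed for illustration and are not specified in the disclosure:

```python
def ground_sample_distance(pixel_pitch_m, altitude_m, focal_length_m):
    """Nadir ground sample distance: the ground footprint of one pixel."""
    return pixel_pitch_m * altitude_m / focal_length_m

def capture_interval(gsd_m, overlap_fraction, pixels_along_track, speed_m_s):
    """Seconds between exposures so that consecutive frames keep the
    requested forward overlap."""
    footprint = gsd_m * pixels_along_track          # along-track ground coverage
    return footprint * (1.0 - overlap_fraction) / speed_m_s

# Illustrative values: 6 micron pixels, 100 mm lens, ~610 m (2,000 ft) AGL,
# ~61.7 m/s (120 knots), 10,000 pixels along track, 60% forward overlap.
gsd = ground_sample_distance(6e-6, 610.0, 0.100)    # ~3.7 cm per pixel
interval = capture_interval(gsd, 0.60, 10_000, 61.7)
```

This kind of arithmetic is one way the image capture interval could be made dependent on the velocity of the platform 30, as described above.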
The image data signals 52 corresponding to each image acquired, and the data points acquired via the LIDAR scanner 36, may be received by and/or stored within the one or more memories 92 of the image capturing and processing computer system 50 via the I/O port 82. Similarly, the signals 60, 62, 64, 66, 68, 70, 72 and 74 corresponding to each captured image may be received and stored within the one or more memories 92 of the image capturing and processing computer system 50 via the I/O port 82. The LIDAR scanner signals 54 may be received and stored as LIDAR 3D point clouds.
Thus, the location of the one or more image capturing devices 34 relative to the ground surface 32 at the precise moment each image is captured is recorded within the one or more memories 92 and associated with the corresponding captured oblique and/or nadir image. Additionally, location data associated with one or more objects of interest may be catalogued and stored within one or more databases.
The processor 76 may create and/or store in the one or more memories 92, one or more output image and data files. For example, the processor 76 may convert image data signals 52, signals 60, 62, 64, 66, 68, 70, 72 and 74, and LIDAR scanner signals 54 into computer-readable output image files, data files, and LIDAR 3D point cloud files. The output image files, data files, and LIDAR 3D point cloud files may include a plurality of captured image files corresponding to captured oblique and/or nadir images, positional data, and/or LIDAR 3D point clouds corresponding thereto. Additionally, data associated with the one or more objects of interest and/or images may be catalogued and saved within one or more databases. For example, location information and/or metadata may be catalogued and saved within one or more databases.
Output image, data files, and LIDAR 3D point cloud files may then be further provided, displayed and/or used for obtaining measurements of and between objects depicted within the captured images, including measurements of variable distribution. In some embodiments, the image capturing and processing computer system 50 may be used to provide, display and/or obtain measurements of and between objects depicted within the captured images. Alternatively, the image capturing and processing computer system 50 may deliver the output image, data files, and/or LIDAR 3D point clouds to one or more processors, such as, for example, the processors 84 illustrated in FIG. 4 for processors 84 to provide, display and/or obtain measurement.
In some embodiments, delivery of the output image, data files, and/or LIDAR 3D point cloud files may also be by physical removal of the files from the image capturing and processing computer system 50. For example, the output image, data files, and/or LIDAR 3D point cloud files may be stored on a removable storage device and transported to one or more processors 84. In some embodiments, the image capturing and processing computer system 50 may provide at least a portion of the display and/or determine at least a portion of the measurements further described herein.
For simplicity, the following description for measurement of objects of interest as described herein includes reference to residential housing wherein the roof is the object of interest; however, it should be understood by one skilled in the art that the methods described herein may be applied to any structure and/or object of interest. For example, the methods may be applied to any man-made and/or natural object (e.g., commercial building structure, tree, driveway, road, bridge, concrete, water, turf and/or the like).
FIGS. 5-9 illustrate exemplary images that are annotated to explain how embodiments of the present disclosure automatically locate objects of interest and automatically generate building outlines and reports about the objects of interest that can be used for a variety of purposes including automated building reports, change detection by comparing building outlines generated from imagery of the same building but captured at different times, and steering of mosaic cutlines through portions of the imagery that do not contain a building outline.
Referring to FIG. 5, illustrated therein is a screen shot 100 of an image 102 of a region 104 having multiple buildings 106. In this example, the region 104 includes one or more objects of interest with each building 106 being an object of interest. Roofs 108 of each building are features of the objects of interest. It should be noted that the object(s) of interest and/or feature(s) of the object(s) of interest may be any natural and/or man-made objects within the image. For simplicity of description, buildings 106, as the objects of interest, and roofs 108, as the features of the objects of interest, will be used in the following description. It should be noted, however, that any object of interest within an image may be used including man-made or non-man-made objects (e.g., natural objects). Further, objects of interest absent within an image may also be determined using systems and methods described herein. For example, alterations in a footprint of trees (i.e., loss of trees) may be determined using systems and methods described herein.
Generally, the output image file and data files may be used to geo-reference the collected images. Exemplary methods for geo-referencing the imagery may be found in at least U.S. Pat. Nos. 7,424,133 and 5,247,356, which are hereby incorporated by reference in their entirety. Geo-referencing each image results in information that can be used with predetermined algorithms, such as a single-ray projection algorithm, to determine three dimensional geographical coordinates for points in each image. Using the internal and external orientation information also permits the real-world three-dimensional position of pixels to be determined using multiple images and stereophotogrammetry techniques. Referring to FIGS. 1, 5 and 6, a point cloud may be generated as discussed above using geo-referenced images and/or LIDAR data points.
FIG. 6 illustrates a screen shot 110 of a point cloud 112 of the region 104 having the building 106. In some embodiments, the point cloud 112 may be generated by extracting points with determined geographical coordinates using two or more images and geo-referenced data obtained from the image capturing and processing computer system 50. Using a known or calculated distance between capture locations of the one or more image capturing devices 34 and stereo photogrammetry using multiple images, three dimensional points having three dimensional distances from the one or more image capturing devices 34 may be determined. In some embodiments, stereo analysis using standard stereo pair photogrammetry techniques may be automated. In each image, a geographical coordinate, such as (x,y,z) may correspond to a data point in the point cloud 112. As such, each data point has a three-dimensional coordinate.
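By way of a non-limiting illustrative sketch of the stereo-pair principle referenced above (the function name and numeric values are assumptions for illustration, not the disclosed implementation), the depth of a point seen in two overlapping images may be recovered from the classic relation between focal length, camera baseline, and pixel disparity:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic stereo-pair relation: depth = focal length * baseline / disparity.

    focal_px     -- focal length of the image capturing device, in pixels
    baseline_m   -- distance between the two capture locations, in meters
    disparity_px -- horizontal shift of the same point between the two images
    """
    return focal_px * baseline_m / disparity_px

# A 1000 px focal length, 0.5 m baseline, and 10 px disparity imply a 50 m depth.
depth = depth_from_disparity(1000.0, 0.5, 10.0)
```

Combining such depths with the geo-referenced interior and exterior orientation of each image yields the three-dimensional (x, y, z) coordinate assigned to each data point of the point cloud 112.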
In some embodiments, the LIDAR 3D data point files may be processed and geo-referenced providing the point cloud 112. For example, the LIDAR 3D data point files may be processed and geo-referenced using software such as Riegl's RiProcess application, distributed by Riegl located in Horn, Austria. In some embodiments, images and georeferenced data, in addition to LIDAR data point files, may be used to generate the point cloud. In some embodiments, images and georeferenced data, in lieu of LIDAR data point files, may be used to generate the point cloud 112.
Referring to FIGS. 1, 6 and 7, in three-dimensional space, natural and man-made structures may be above ground (i.e., ground surface 32 illustrated in FIG. 2). The point cloud 112 may be analyzed to identify data points within the point cloud 112 at particular elevations above ground and/or classify data points within the point cloud at particular elevations. In one example, the point cloud 112 may be analyzed to identify and classify ground structures 114 and/or non-ground structures (i.e., those structures above the ground including man-made objects and natural objects). In some embodiments, areas within the point cloud 112 indicative of the ground structures 114 may be classified as the ground structure 114 and/or removed such that further analysis can be directed to natural and/or man-made structures depicted in the point cloud 112 above the ground structure 114.
Even further, data points at particular elevations may be isolated, classified and/or removed from the point cloud 112. For example, data points within the point cloud 112 not associated with structures at an elevation of interest 116 may be removed such that only data points of structures at the elevation of interest 116 remain within a modified point cloud 118. For example, FIG. 7 illustrates a screen shot 120 of the modified point cloud 118 wherein data points at the elevation of interest 116 (i.e., height h) were identified from the point cloud 112. Even further, data points not associated with structures at the elevation of interest 116 shown in FIG. 6 were classified as not being associated with structures, and then optionally removed from the point cloud 112 to obtain the modified point cloud 118 shown in FIG. 7.
In some embodiments, the elevation of interest 116 may be an average or estimated elevation. For example, FIG. 7 illustrates a screen shot 120 of a modified point cloud 118 showing data points at the elevation of interest 116 of an estimated height h of a roof from ground level shown in FIG. 6. In some embodiments, the elevation of interest 116 may be determined by analysis of the elevation gradient of the data points.
Referring to FIGS. 1, 6 and 7, an object of interest may be detected using the point cloud and the elevation gradient. For example, the object of interest may be the building 106. In one example, the remaining data points at the elevation of interest 116 may be used to classify structures in the modified point cloud 118 as man-made structures 122 or natural structures 124 with each data point having a unique three dimensional point (x,y,z) within the modified point cloud 118. In particular, spatial relationships between each data point may be analyzed to classify such structures as man-made structures 122 (i.e., caused by humankind) or natural structures 124 (i.e., not made or caused by humankind). In one example, the variation of surface normal direction between each point of a group of points and the other points may be analyzed to differentiate man-made structures 122 and natural structures 124. This analysis can be implemented by analyzing a group of points within a local area surrounding particular points within the modified point cloud 118 to determine an orientation of a plane fitting the group of points and assigning this value(s) to the particular point. This process can be repeated for all of the points in the modified point cloud 118 that are to be classified as either a man-made structure or a natural structure, or for a subset of the points. The local area can be a matrix of pixels that is a subset of the modified point cloud. The size of the matrix can be varied. The matrix has a number of pixels that can be within a range of 0.1% to about 10% of the number of pixels within the modified point cloud 118. For example, the local area can be a 25×25 pixel matrix or the like. Once the orientations have been calculated, then the adjacent orientations are compared to determine a local variation of the surface normal direction.
If the data points represent a man-made structure, then the local variation of surface normal direction should be smaller than if the data points represent a natural structure, such as a tree or bush. In some embodiments, data points in which local variation exceeds a pre-determined amount (e.g., 10-20 degrees, and more particularly 15 degrees) may be classified as the natural structure 124. Data points having variation that is below the pre-determined amount may be classified as a man-made structure 122. Each of the data points within the modified point cloud may be analyzed and data points positioned at or above the elevation of interest 116 may be classified as natural structures 124 or man-made structures 122 rather than the ground structure 114.
Referring to FIGS. 7 and 8, each of the data points at the elevation of interest 116 and classified as a part of a man-made structure 122 may be further identified as part of an identifiable structure within an image. For example, groupings of data points within Region A classified as part of man-made structures 122 may be further classified as building 106 a; groupings of data points within Region B classified as part of man-made structures 122 may be further classified as building 106 b; groupings of data points within Region C classified as part of man-made structures 122 may be further classified as buildings 106 c and 106 e; and, groupings of data points within Region D classified as part of man-made structures 122 may be further classified as buildings 106 d and 106 f. Groupings of data points can be classified as particular buildings by detecting building outlines. Once the building outlines are detected, data points within the building outline are classified as part of the building.
Once groupings within the point cloud 112 or the modified point cloud 118 have been identified, then the outer most boundaries are identified to determine a rough outline of the object. The rough outline of the object can be refined by correlating the locations of the data points in the point cloud 112 or the modified point cloud 118 with images from one or more of the databases 94, and then analyzing the raster content within one or more images showing the object to more precisely locate the outline of the object. For example, FIG. 9 illustrates a screen shot 136 of building 106 d. In some embodiments, edges of the building 106 d may be detected within the images using any standard edge detection algorithm. Standard edge detection algorithms may include, but are not limited to, a Laplacian filter, and/or the like.
In some embodiments, spectral analysis of the raster content can be used to identify whether data points within the modified point cloud 118 are likely identifying an object of interest (e.g., buildings 106). For example, the automated roof detection system described in U.S. Pat. No. 9,070,018, which is hereby incorporated by reference in its entirety, may be used to identify the objects of interest within images. Although the automated roof detection system is described in relation to detection of roofs, it should be noted that such system may apply to identification of any object of interest (e.g., man-made or natural structures). Generally, the automated object detection system may use statistical analysis to analyze sections of the images that depict the object of interest, as previously determined within the modified point cloud 118. The automated object detection system may further refine the boundary of the buildings 106 and/or roof 130. Statistical measures may be computed for sections of each image.
FIGS. 10-12 illustrate an exemplary process for determining group descriptor patterns used to identify objects of interest within an image for automated object classification. The rough outline of the object can be refined by correlating the locations of the data points in the point cloud 112 or the modified point cloud 118 with images from one or more of the databases 94, and then analyzing the raster content within one or more images showing the object to more precisely locate the outline of the object.
Referring to FIG. 10, the value of each pixel within the image is part of a vector comprising the values determined for each of the measures calculated for that pixel within an n-dimensional feature space as shown in FIG. 10.
Referring to FIG. 11, a feature transformation may be performed, wherein image measures are statistically analyzed using clustering techniques to group data into separable descriptor groups (i.e., clusters). Image measures may include, for example, localized measures and neighborhood measures. Localized measures describe data points at a particular processing point. Example localized measures may include, but are not limited to, surface fractal analysis, topological contour complexity variation, and/or the like. Neighborhood measures may include ranging related neighborhood measures that describe the organization of structure surrounding a particular processing point. Exemplary neighborhood measures may include, but are not limited to, radial complexity, radial organization variation, and/or the like. It should be noted that the number of descriptor groups may not necessarily be the same as the number of measures.
Descriptor groups (i.e., clusters) may form a new feature vector that includes any error and/or uncertainty (e.g., integrity of the point cloud) of the measurements folded into a space describing the average statistical value of a structure or feature of a structure (e.g., building 106, roof).
Referring to FIG. 12, each descriptor group may be compared to one or more descriptor groups using a statistical model to create a descriptor pattern of inter-relational characteristics. For example, each descriptor group (i.e., cluster) may be compared to other descriptor groups using a statistical model such as an associative neural network. The group descriptor patterns may serve as the basis for identifying and/or classifying objects of interest and/or features of interest within the modified point cloud 118.
An average group descriptor pattern may be generated and cross correlated using an associated neural network, creating a single pattern template (e.g., a homogenous pattern) that may be used to determine which regions of the image are likely building 106 and/or roof 130. The trained neural networks may then use the pattern template to discriminate between the building 106 and/or roof 130 and other surrounding environment. For example, the trained neural networks may use the homogenous pattern template to discriminate between roof 130 and non-roof areas. Based on these indications, second information, e.g., a boundary 138 (e.g., outline) may be determined for the building 106 (or roof 130) as shown in FIG. 9.
Once the boundary 138 is determined, locations within the raster content may be converted to vector form. In one example, the trained neural networks may distinguish between roof 130 and non-roof areas giving an estimated boundary for the roof 130. Such boundary may be defined in raster format. To convert the raster format into vector format, building corners 137 may be identified in the raster format and used to convert each line segment of the boundary 138 into vector form. Once in vector form, the boundary 138 may be stored in one of the databases 94 (shown in FIG. 4) and/or further analyzed. In some embodiments, the raster content may further be geo-referenced to real-world locations. For example, the building 106 may be identified by Latitude/Longitude. In some embodiments, using the boundary 138, the building 106 may be extracted from the image as shown in FIG. 13. Such information may be stored as individual files for each building (e.g., .shp file) in the database 94, or may be stored as multiple files within the database 94.
Referring to FIGS. 7, 8 and 14, once an object of interest is identified within the modified point cloud 118, features of interest (e.g., roofs 130 a-130 d, roof elements) within an image may be identified on the objects of interest (e.g., building 106 a-106 d) via the modified point cloud 118, the point cloud 112, or the original image(s). In some embodiments, identification of features of interest (e.g., roofs 130) may be through the automated object detection system as described herein and in U.S. Pat. No. 9,070,018. In some embodiments, identification of features of interest may be identified via the original image subsequent to the automated classification of the object of interest via methods and systems as described, for example, U.S. Pat. Nos. 8,977,520 and 7,424,133, which are both hereby incorporated by reference in their entirety.
In some embodiments, elements within the feature of interest (e.g., roof 130) may be further classified. For example, FIG. 14 illustrates a screen shot 132 of roofs 130 a-130 d. Line segments 134 forming the roof may be further classified using predefined roof elements including, but not limited to, a rake, a hip, a valley, an eave, a ridge, and/or the like.
In some embodiments, one or more features of interest may be further analyzed and/or measurements determined. For example, features of the roof 130, dimensions of the roof 130, and/or roof pitch may be determined using data points within the point cloud 112, the modified point cloud 118 and/or the one or more images. For example, roof pitch may be calculated using data points of the point cloud 112 as each data point corresponds to a geographical reference (x,y,z). Using data points of the point cloud 112 located at an elevation at the top of a peak and data points located at an elevation at the bottom of a peak, roof pitch may be calculated. Roof feature calculations may include, but are not limited to, dimensions of eaves, edges, ridge, angles, and/or the like. Characteristics of the roof 130 such as features of the roof 130, dimensions of the roof 130, roof pitch, condition and/or composition of the roof 130 may be determined by analyzing the one or more images. For example, if the one or more images were obtained from a manned aerial vehicle and included multi-spectral or hyper spectral information, then condition or composition of the roof 130 may also be determined. The capture platform may also be a drone, and in this instance the resolution of the one or more images may be sufficient to determine the condition (e.g., moss, excessive moisture, hail damage, etc.) or composition (tile, composition, wood, slate, etc.) of the roof 130.
In one example, boundaries may be used to identify changes to man-made and/or natural objects within an image. For example, boundaries 138 (as shown in FIG. 9) may be used to identify changes to buildings 106 and/or roof 130 over time. In some embodiments, boundaries and/or extraction techniques may be used in forming one or more mosaic models of the object of interest. For example, boundaries 138 and/or extraction techniques as described herein may be used in forming a mosaic model of the building 106. Images illustrating each side of the building 106 may be extracted using the methods as described herein. Such images may be composed into a three-dimensional mosaic model illustrating the building 106. In another example, boundaries and/or analysis of the features within the boundaries may be analyzed and/or described within a report.
Referring to FIGS. 1, 4 and 15, a customer and/or contractor may receive a report regarding evaluation of object(s) of interest and/or feature(s) of interest. For example, the customer and/or contractor may receive a report regarding evaluation of the building 106 and/or roof 130. FIG. 15 illustrates an exemplary embodiment of a roof report 140. The program logic 96 may provide for one or more of the processors 84 interfacing with the image capturing and processing computer system 50 over the network 86 to provide one or more roof reports 140.
Generally, the roof report 140 may include, but is not limited to, one or more data sets 142 regarding roof pitch, total area, eave length, hip ridge length, valley length, number of box vents, and/or the like. Additionally, the roof report 140 may include one or more images 144 of the building 106 and/or roof 130. Such images 144 may be automatically provided to the roof report 140 via extraction of the building 106 and/or roof 130 as described herein. Additionally, the roof report 140 may include a customer information data set 146 (e.g., customer name and contact information), estimated area detail, contractor data set 148 (e.g., contractor name and contact information), and/or the like.
In some embodiments, determination, analysis and measurements of data associated with the object of interest and/or features of interest may be catalogued and stored in one or more databases for retrieval. Data catalogued and stored in association with the object of interest and/or feature of interest may include, but is not limited to, location data (e.g., object of interest, each point of the object of interest), date and/or time of image creation, algorithms used, measurements, metadata, footprint, and/or the like. For example, boundaries associated with the object of interest and/or feature of interest may be spatially catalogued using location (e.g., coordinate data, address).
From the above description, it is clear that the inventive concepts disclosed and claimed herein are well adapted to carry out the objects and to attain the advantages mentioned herein, as well as those inherent in the invention. While exemplary embodiments of the inventive concepts have been described for purposes of this disclosure, it will be understood that numerous changes may be made which will readily suggest themselves to those skilled in the art and which are accomplished within the spirit of the inventive concepts disclosed and claimed herein.

Claims (26)

What is claimed is:
1. One or more non-transitory computer readable medium storing a set of computer executable instructions for running on one or more computer systems that when executed cause the one or more computer systems to:
classify at least one feature within an image as a man-made structure, wherein the man-made structure is a building having a roof, wherein classification of the feature as a man-made structure includes:
analyzing data points within a point cloud indicative of a man-made structure, a natural structure, and a ground structure to automatically classify certain of the data points within the point cloud representing the man-made structure as the man-made structure;
determining first information about the man-made structure by analyzing the data points classified as the man-made structure;
correlating the data points within the point cloud classified as the man-made structure with one or more images;
spectrally analyze image raster content of at least one section of the one or more images correlated with the data points within the point cloud that are classified as the man-made structure to generate second information in which the second information is an update of the first information; and
determine an estimated roof pitch by analyzing the one or more images.
2. The one or more non-transitory computer readable medium storing the set of computer executable instructions for running on one or more computer systems of claim 1, wherein the first information is a boundary of the building.
3. The one or more non-transitory computer readable medium storing the set of computer executable instructions for running on one or more computer systems of claim 1, wherein the second information is a boundary of the building.
4. The one or more non-transitory computer readable medium storing the set of computer executable instructions for running on one or more computer systems of claim 1, further comprising generating a roof report including one image showing the roof and the estimated roof pitch.
5. The one or more non-transitory computer readable medium storing the set of computer executable instructions for running on one or more computer systems of claim 1, wherein classification of man-made structures includes determination of shape of a grouping of data points.
6. The one or more non-transitory computer readable medium storing the set of computer executable instructions for running on one or more computer systems of claim 5, wherein determination of shape of the grouping of data points includes analyzing the grouping of data points to determine a variation of surface normal direction between data points.
7. The one or more non-transitory computer readable medium storing the set of computer executable instructions for running on one or more computer system of claim 1, further comprising spatially cataloguing location of data points associated with the image.
8. One or more non-transitory computer readable medium storing a set of computer executable instructions for running on one or more computer systems that when executed cause the one or more computer systems to:
classify at least one feature within an image as a man-made structure, wherein classification of the feature as a man-made structure includes:
analyzing data points within a point cloud indicative of a man-made structure, a natural structure, and a ground structure to automatically classify certain of the data points within the point cloud representing the man-made structure as the man-made structure, wherein the data points having a Z value above a first elevation are classified as at least one of the man-made structure and the natural structure;
determining first information about the man-made structure by analyzing the data points classified as the man-made structure, wherein the first information includes an outline of the man-made structure;
correlating the data points within the point cloud classified as the man-made structure with one or more images; and
spectrally analyze image raster content of at least one section of the one or more images correlated with the data points within the point cloud that are classified as the man-made structure to generate second information in which the second information is an update of the first information.
9. The one or more non-transitory computer readable medium storing the set of computer executable instructions for running on one or more computer systems of claim 8, wherein the outline is a first outline, and wherein the second information is a second outline of the man-made structure based upon the first outline.
10. The one or more non-transitory computer readable medium storing the set of computer executable instructions for running on one or more computer system of claim 8, wherein the man-made structure is a building having a roof.
11. The one or more non-transitory computer readable medium storing the set of computer executable instructions for running on one or more computer systems of claim 10, further comprising determining an estimated roof pitch by analyzing data points within the point cloud.
12. The one or more non-transitory computer readable medium storing the set of computer executable instructions for running on one or more computer systems of claim 10, further comprising determining an estimated roof pitch by analyzing the one or more images.
13. The one or more non-transitory computer readable medium storing the set of computer executable instructions for running on one or more computer system of claim 10, further comprising generating a roof report including one image showing the roof and an estimated roof pitch.
14. The one or more non-transitory computer readable medium storing the set of computer executable instructions for running on one or more computer systems of claim 8, wherein to classify at least one feature within the image as a man-made structure includes determination of shape of a grouping of data points.
15. The one or more non-transitory computer readable medium storing the set of computer executable instructions for running on one or more computer systems of claim 14, wherein determination of shape of the grouping of data points includes analyzing the grouping of data points to determine a variation of surface normal direction between data points.
16. One or more non-transitory computer readable medium storing a set of computer executable instructions for running on one or more computer system that when executed cause the one or more computer systems to:
classify at least one feature within an image as a man-made structure, wherein classification of the feature as a man-made structure includes:
analyzing data points within a point cloud indicative of a man-made structure, a natural structure, and a ground structure to automatically classify certain of the data points within the point cloud representing the man-made structure as the man-made structure;
determining first information about the man-made structure by analyzing the data points classified as the man-made structure;
correlating the data points within the point cloud classified as the man-made structure with one or more images; and
spectrally analyze image raster content of at least one section of the one or more images correlated with the data points within the point cloud that are classified as the man-made structure, including grouping of data points based on one or more similar characteristics, to generate second information in which the second information is an update of the first information.
17. The one or more non-transitory computer readable medium storing the set of computer executable instructions for running on one or more computer system of claim 16, wherein similar characteristics include similar spectral signatures.
18. The one or more non-transitory computer readable medium storing the set of computer executable instructions for running on one or more computer system of claim 16, wherein similar characteristics include similar thermal signatures.
19. The one or more non-transitory computer readable medium storing the set of computer executable instructions for running on one or more computer system of claim 16, wherein the man-made structure is a building having a roof.
20. The one or more non-transitory computer readable medium storing the set of computer executable instructions for running on one or more computer systems of claim 19, further comprising determining an estimated roof pitch by analyzing data points within the point cloud.
21. The one or more non-transitory computer readable medium storing the set of computer executable instructions for running on one or more computer systems of claim 19, further comprising determining an estimated roof pitch by analyzing the one or more images.
22. The one or more non-transitory computer readable medium storing the set of computer executable instructions for running on one or more computer system of claim 19, further comprising generating a roof report including one image showing the roof and an estimated roof pitch.
23. The one or more non-transitory computer readable medium storing the set of computer executable instructions for running on one or more computer systems of claim 16, wherein to classify at least one feature within the image as a man-made structure includes determination of shape of a grouping of data points.
24. The one or more non-transitory computer readable medium storing the set of computer executable instructions for running on one or more computer systems of claim 23, wherein determination of shape of the grouping of data points includes analyzing the grouping of data points to determine a variation of surface normal direction between data points.
25. The one or more non-transitory computer readable medium storing the set of computer executable instructions for running on one or more computer systems of claim 16, wherein the first information is a boundary of the man-made structure.
26. The one or more non-transitory computer readable medium storing the set of computer executable instructions for running on one or more computer systems of claim 16, wherein the second information is a boundary of the man-made structure.
US11312379B2 (en) * 2019-02-15 2022-04-26 Rockwell Collins, Inc. Occupancy map synchronization in multi-vehicle networks
US11367265B2 (en) 2020-10-15 2022-06-21 Cape Analytics, Inc. Method and system for automated debris detection
US20230059652A1 (en) * 2021-08-19 2023-02-23 Forest Carbon Works, PBC Systems and methods for forest surveying
US11861843B2 (en) 2022-01-19 2024-01-02 Cape Analytics, Inc. System and method for object analysis
US11875413B2 (en) 2021-07-06 2024-01-16 Cape Analytics, Inc. System and method for property condition analysis

Families Citing this family (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10032310B2 (en) 2016-08-22 2018-07-24 Pointivo, Inc. Methods and systems for wireframes of a structure or element of interest and wireframes generated therefrom
EP3513382B1 (en) * 2016-09-14 2020-09-02 Fraunhofer Gesellschaft zur Förderung der angewandten Forschung E.V. Pattern detection
US11043026B1 (en) 2017-01-28 2021-06-22 Pointivo, Inc. Systems and methods for processing 2D/3D data for structures of interest in a scene and wireframes generated therefrom
KR102414676B1 (en) * 2017-03-07 2022-06-29 삼성전자주식회사 Electronic apparatus and operating method for generating a map data
US10423831B2 (en) * 2017-09-15 2019-09-24 Honeywell International Inc. Unmanned aerial vehicle based expansion joint failure detection system
EP3460729A1 (en) 2017-09-26 2019-03-27 Ricoh Company, Ltd. Information processing apparatus, system of assessing structural object, method of assessing structural object, and carrier means
US11208208B2 (en) * 2017-10-18 2021-12-28 Geocue Group, Inc. Systems and methods for synchronizing events in shifted temporal reference systems
CN107832805B (en) * 2017-11-29 2021-04-16 常州大学 Technology for eliminating influence of spatial position error on remote sensing soft classification precision evaluation based on probability position model
CN109931923B (en) * 2017-12-15 2023-07-07 阿里巴巴集团控股有限公司 Navigation guidance diagram generation method and device
US11514644B2 (en) 2018-01-19 2022-11-29 Enphase Energy, Inc. Automated roof surface measurement from combined aerial LiDAR data and imagery
US10937165B2 (en) * 2018-03-21 2021-03-02 International Business Machines Corporation Comparison of relevant portions of images
US10930001B2 (en) * 2018-05-29 2021-02-23 Zebra Technologies Corporation Data capture system and method for object dimensioning
US11106911B1 (en) 2018-06-13 2021-08-31 Pointivo, Inc. Image acquisition planning systems and methods used to generate information for structures of interest
WO2020056041A1 (en) * 2018-09-11 2020-03-19 Pointivo, Inc. Improvements in data acquisition, processing, and output generation for use in analysis of one or a collection of physical assets of interest
EP3881161A1 (en) * 2018-11-14 2021-09-22 Cape Analytics, Inc. Systems, methods, and computer readable media for predictive analytics and change detection from remotely sensed imagery
US11200700B2 (en) * 2019-01-10 2021-12-14 Mediatek Singapore Pte. Ltd. Methods and apparatus for signaling viewports and regions of interest for point cloud multimedia data
CN109782786B (en) * 2019-02-12 2021-09-28 上海戴世智能科技有限公司 Positioning method based on image processing and unmanned aerial vehicle
US11106937B2 (en) * 2019-06-07 2021-08-31 Leica Geosystems Ag Method for creating point cloud representations
US11238282B2 (en) 2019-06-07 2022-02-01 Pictometry International Corp. Systems and methods for automated detection of changes in extent of structures using imagery
CN110490061B (en) * 2019-07-11 2021-10-22 武汉大学 Uncertainty modeling and measuring method for remote sensing image characteristics
CN110555826B (en) * 2019-08-04 2022-04-15 大连理工大学 Three-dimensional point cloud feature extraction method based on local outlier factors
CN110717496B (en) * 2019-08-29 2021-06-08 浙江工业大学 Complex scene tree detection method based on neural network
JP7313998B2 (en) * 2019-09-18 2023-07-25 株式会社トプコン Survey data processing device, survey data processing method and program for survey data processing
US11776104B2 (en) 2019-09-20 2023-10-03 Pictometry International Corp. Roof condition assessment using machine learning
US11023730B1 (en) * 2020-01-02 2021-06-01 International Business Machines Corporation Fine-grained visual recognition in mobile augmented reality
US20210248776A1 (en) * 2020-02-07 2021-08-12 Omnitracs, Llc Image processing techniques for identifying location of interest
US11494977B2 (en) * 2020-02-28 2022-11-08 Maxar Intelligence Inc. Automated process for building material detection in remotely sensed imagery
CN111460051B (en) * 2020-04-02 2020-11-20 哈尔滨工程大学 Data association method based on tree structure and layer-by-layer node deletion
WO2021243205A1 (en) * 2020-05-29 2021-12-02 Motion2Ai Method and system for geo-semantic recognition of structural object
DE102020117059A1 (en) 2020-06-29 2021-12-30 Bayernwerk Ag System for processing georeferenced 3D point clouds and method for generating georeferenced 3D point clouds
CN112096566A (en) * 2020-08-27 2020-12-18 上海扩博智能技术有限公司 Method, system, equipment and medium for acquiring shutdown state parameters of fan
TWI744001B (en) * 2020-09-22 2021-10-21 神通資訊科技股份有限公司 Ipized device for uav flight controller
EP4229356A1 (en) * 2020-10-15 2023-08-23 Scout Space Inc. Passive hyperspectral visual and infrared sensor package for mixed stereoscopic imaging and heat mapping
CN112861817A (en) * 2021-03-31 2021-05-28 国网上海市电力公司 Instrument noise image processing method
CN114337881B (en) * 2021-11-26 2023-02-03 西安电子科技大学 Wireless spectrum intelligent sensing method based on multi-unmanned aerial vehicle distribution and LMS
CN113963262B * 2021-12-20 2022-08-23 China University of Geosciences (Wuhan) Mining area land coverage classification method, equipment, device and storage medium

Citations (171)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US2273876A (en) 1940-02-12 1942-02-24 Frederick W Lutz Apparatus for indicating tilt of cameras
US3153784A (en) 1959-12-24 1964-10-20 Us Industries Inc Photo radar ground contour mapping system
US3594556A (en) 1969-01-08 1971-07-20 Us Navy Optical sight with electronic image stabilization
US3614410A (en) 1969-06-12 1971-10-19 Knight V Bailey Image rectifier
US3621326A (en) 1968-09-30 1971-11-16 Itek Corp Transformation system
US3661061A (en) 1969-05-05 1972-05-09 Atomic Energy Commission Picture position finder
US3716669A (en) 1971-05-14 1973-02-13 Japan Eng Dev Co Mapping rectifier for generating polarstereographic maps from satellite scan signals
US3725563A (en) 1971-12-23 1973-04-03 Singer Co Method of perspective transformation in scanned raster visual display
US3864513A (en) 1972-09-11 1975-02-04 Grumman Aerospace Corp Computerized polarimetric terrain mapping system
US3866602A (en) 1973-05-29 1975-02-18 Olympus Optical Co Endoscope camera with orientation indicator
US3877799A (en) 1974-02-06 1975-04-15 United Kingdom Government Method of recording the first frame in a time index system
US4015080A (en) 1973-04-30 1977-03-29 Elliott Brothers (London) Limited Display devices
US4044879A (en) 1975-03-07 1977-08-30 Siemens Aktiengesellschaft Arrangement for recording characters using mosaic recording mechanisms
US4184711A (en) 1977-10-14 1980-01-22 Yasuo Wakimoto Folding canvas chair
US4240108A (en) 1977-10-03 1980-12-16 Grumman Aerospace Corporation Vehicle controlled raster display system
US4281354A (en) 1978-05-19 1981-07-28 Raffaele Conte Apparatus for magnetic recording of casual events relating to movable means
US4344683A (en) 1979-09-29 1982-08-17 Agfa-Gevaert Aktiengesellschaft Quality control method and apparatus for photographic pictures
US4360876A (en) 1979-07-06 1982-11-23 Thomson-Csf Cartographic indicator system
US4382678A (en) 1981-06-29 1983-05-10 The United States Of America As Represented By The Secretary Of The Army Measuring of feature for photo interpretation
US4387056A (en) 1981-04-16 1983-06-07 E. I. Du Pont De Nemours And Company Process for separating zero-valent nickel species from divalent nickel species
US4396942A (en) 1979-04-19 1983-08-02 Jackson Gates Video surveys
US4463380A (en) 1981-09-25 1984-07-31 Vought Corporation Image processing system
US4489322A (en) 1983-01-27 1984-12-18 The United States Of America As Represented By The Secretary Of The Air Force Radar calibration using direct measurement equipment and oblique photometry
US4490742A (en) 1982-04-23 1984-12-25 Vcs, Incorporated Encoding apparatus for a closed circuit television system
US4491399A (en) 1982-09-27 1985-01-01 Coherent Communications, Inc. Method and apparatus for recording a digital signal on motion picture film
US4495500A (en) 1982-01-26 1985-01-22 Sri International Topographic data gathering method
US4527055A (en) 1982-11-15 1985-07-02 Honeywell Inc. Apparatus for selectively viewing either of two scenes of interest
US4543603A (en) 1982-11-30 1985-09-24 Societe Nationale Industrielle Et Aerospatiale Reconnaissance system comprising an air-borne vehicle rotating about its longitudinal axis
US4586138A (en) 1982-07-29 1986-04-29 The United States Of America As Represented By The United States Department Of Energy Route profile analysis system and method
US4635136A (en) 1984-02-06 1987-01-06 Rochester Institute Of Technology Method and apparatus for storing a massive inventory of labeled images
US4653136A (en) 1985-06-21 1987-03-31 Denison James W Wiper for rear view mirror
US4653316A (en) 1986-03-14 1987-03-31 Kabushiki Kaisha Komatsu Seisakusho Apparatus mounted on vehicles for detecting road surface conditions
US4673988A (en) 1985-04-22 1987-06-16 E.I. Du Pont De Nemours And Company Electronic mosaic imaging process
US4686474A (en) 1984-04-05 1987-08-11 Deseret Research, Inc. Survey system for collection and real time processing of geophysical data
US4688092A (en) 1986-05-06 1987-08-18 Ford Aerospace & Communications Corporation Satellite camera image navigation
US4689748A (en) 1979-10-09 1987-08-25 Messerschmitt-Bolkow-Blohm Gesellschaft Mit Beschrankter Haftung Device for aircraft and spacecraft for producing a digital terrain representation
US4707698A (en) 1976-03-04 1987-11-17 Constant James N Coordinate measurement and radar device using image scanner
US4758850A (en) 1985-08-01 1988-07-19 British Aerospace Public Limited Company Identification of ground targets in airborne surveillance radar returns
US4805033A (en) 1987-02-18 1989-02-14 Olympus Optical Co., Ltd. Method of forming oblique dot pattern
US4807024A (en) 1987-06-08 1989-02-21 The University Of South Carolina Three-dimensional display methods and apparatus
US4814711A (en) 1984-04-05 1989-03-21 Deseret Research, Inc. Survey system and method for real time collection and processing of geophysicals data using signals from a global positioning satellite network
US4814896A (en) 1987-03-06 1989-03-21 Heitzman Edward F Real time video data acquisition systems
US4843463A (en) 1988-05-23 1989-06-27 Michetti Joseph A Land vehicle mounted audio-visual trip recorder
US4899296A (en) 1987-11-13 1990-02-06 Khattak Anwar S Pavement distress survey system
US4906198A (en) 1988-12-12 1990-03-06 International Business Machines Corporation Circuit board assembly and contact pin for use therein
US4953227A (en) 1986-01-31 1990-08-28 Canon Kabushiki Kaisha Image mosaic-processing method and apparatus
US4956872A (en) 1986-10-31 1990-09-11 Canon Kabushiki Kaisha Image processing apparatus capable of random mosaic and/or oil-painting-like processing
US5034812A (en) 1988-11-14 1991-07-23 Smiths Industries Public Limited Company Image processing utilizing an object data store to determine information about a viewed object
US5086314A (en) 1990-05-21 1992-02-04 Nikon Corporation Exposure control apparatus for camera
US5121222A (en) 1989-06-14 1992-06-09 Toshiaki Endoh Method and apparatus for producing binary picture with detection and retention of plural binary picture blocks having a thin line pattern including an oblique line
US5138444A (en) 1991-09-05 1992-08-11 Nec Corporation Image pickup system capable of producing correct image signals of an object zone
US5155597A (en) 1990-11-28 1992-10-13 Recon/Optical, Inc. Electro-optical imaging array with motion compensation
US5164825A (en) 1987-03-30 1992-11-17 Canon Kabushiki Kaisha Image processing method and apparatus for mosaic or similar processing therefor
US5166789A (en) 1989-08-25 1992-11-24 Space Island Products & Services, Inc. Geographical surveying using cameras in combination with flight computers to obtain images with overlaid geographical coordinates
US5191174A (en) 1990-08-01 1993-03-02 International Business Machines Corporation High density circuit board and method of making same
US5200793A (en) 1990-10-24 1993-04-06 Kaman Aerospace Corporation Range finding array camera
US5210586A (en) 1990-06-27 1993-05-11 Siemens Aktiengesellschaft Arrangement for recognizing obstacles for pilots of low-flying aircraft
US5231435A (en) 1991-07-12 1993-07-27 Blakely Bruce W Aerial camera mounting apparatus
US5247356A (en) 1992-02-14 1993-09-21 Ciampa John A Method and apparatus for mapping and measuring land
US5251037A (en) 1992-02-18 1993-10-05 Hughes Training, Inc. Method and apparatus for generating high resolution CCD camera images
US5265173A (en) 1991-03-20 1993-11-23 Hughes Aircraft Company Rectilinear object image matcher
US5267042A (en) 1991-01-11 1993-11-30 Pioneer Electronic Corporation Image pickup device for automatically recording the location where an image is recorded
US5270756A (en) 1992-02-18 1993-12-14 Hughes Training, Inc. Method and apparatus for generating high resolution vidicon camera images
US5296884A (en) 1990-02-23 1994-03-22 Minolta Camera Kabushiki Kaisha Camera having a data recording function
US5335072A (en) 1990-05-30 1994-08-02 Minolta Camera Kabushiki Kaisha Photographic system capable of storing information on photographed image data
US5342999A (en) 1992-12-21 1994-08-30 Motorola, Inc. Apparatus for adapting semiconductor die pads and method therefor
US5345086A (en) 1962-11-28 1994-09-06 Eaton Corporation Automatic map compilation system
US5353055A (en) 1991-04-16 1994-10-04 Nec Corporation Image pickup system with an image pickup device for control
US5369443A (en) 1991-04-12 1994-11-29 Abekas Video Systems, Inc. Digital video effects generator
US5402170A (en) 1991-12-11 1995-03-28 Eastman Kodak Company Hand-manipulated electronic camera tethered to a personal computer
US5414462A (en) 1993-02-11 1995-05-09 Veatch; John W. Method and apparatus for generating a comprehensive survey map
US5467271A (en) 1993-12-17 1995-11-14 Trw, Inc. Mapping and analysis system for precision farming applications
US5481479A (en) 1992-12-10 1996-01-02 Loral Fairchild Corp. Nonlinear scanning to optimize sector scan electro-optic reconnaissance system performance
US5486948A (en) 1989-03-24 1996-01-23 Canon Hanbai Kabushiki Kaisha Stereo image forming apparatus having a light deflection member in each optical path
US5506644A (en) 1992-08-18 1996-04-09 Olympus Optical Co., Ltd. Camera
US5508736A (en) 1993-05-14 1996-04-16 Cooper; Roger D. Video signal processing apparatus for producing a composite signal for simultaneous display of data and video information
US5555018A (en) 1991-04-25 1996-09-10 Von Braun; Heiko S. Large-scale mapping of parameters of multi-dimensional structures in natural environments
US5604534A (en) 1995-05-24 1997-02-18 Omni Solutions International, Ltd. Direct digital airborne panoramic camera system and method
US5617224A (en) 1989-05-08 1997-04-01 Canon Kabushiki Kaisha Image processing apparatus having mosaic processing feature that decreases image resolution without changing image size or the number of pixels
US5633946A (en) 1994-05-19 1997-05-27 Geospan Corporation Method and apparatus for collecting and processing visual and spatial position information from a moving platform
US5668593A (en) 1995-06-07 1997-09-16 Recon/Optical, Inc. Method and camera system for step frame reconnaissance with motion compensation
US5677515A (en) 1991-10-18 1997-10-14 Trw Inc. Shielded multilayer printed wiring board, high frequency, high isolation
US5798786A (en) 1996-05-07 1998-08-25 Recon/Optical, Inc. Electro-optical imaging detector array for a moving vehicle which includes two axis image motion compensation and transfers pixels in row directions and column directions
US5835133A (en) 1996-01-23 1998-11-10 Silicon Graphics, Inc. Optical system for single camera stereo video
US5841574A (en) 1996-06-28 1998-11-24 Recon/Optical, Inc. Multi-special decentered catadioptric optical system
US5844602A (en) 1996-05-07 1998-12-01 Recon/Optical, Inc. Electro-optical imaging array and camera system with pitch rate image motion compensation which can be used in an airplane in a dive bomb maneuver
US5852753A (en) 1997-11-10 1998-12-22 Lo; Allen Kwok Wah Dual-lens camera with shutters for taking dual or single images
US5894323A (en) 1996-03-22 1999-04-13 Tasc, Inc, Airborne imaging system using global positioning system (GPS) and inertial measurement unit (IMU) data
WO1999018732A1 (en) 1997-10-06 1999-04-15 Ciampa John A Digital-image mapping
US5899945A (en) 1995-04-17 1999-05-04 Space Systems/Loral, Inc. Attitude control and navigation system for high resolution imaging
US5963664A (en) 1995-06-22 1999-10-05 Sarnoff Corporation Method and system for image combination using a parallax-based technique
US6037945A (en) 1997-12-16 2000-03-14 Xactware, Inc. Graphical method for modeling and estimating construction costs
EP1010966A1 (en) 1998-12-15 2000-06-21 Aerowest GmbH Method for generating a three dimensional object description
US6094215A (en) 1998-01-06 2000-07-25 Intel Corporation Method of determining relative camera orientation position to create 3-D visual images
US6097854A (en) 1997-08-01 2000-08-01 Microsoft Corporation Image mosaic construction system and apparatus with patch-based alignment, global block adjustment and pair-wise motion-based local warping
US6108032A (en) 1996-11-05 2000-08-22 Lockheed Martin Fairchild Systems System and method for image motion compensation of a CCD image sensor
WO2000053090A1 (en) 1999-03-08 2000-09-14 Tci Incorporated Electric mammograph
US6130705A (en) 1998-07-10 2000-10-10 Recon/Optical, Inc. Autonomous electro-optical framing camera system with constant ground resolution, unmanned airborne vehicle therefor, and methods of use
US6157747A (en) 1997-08-01 2000-12-05 Microsoft Corporation 3-dimensional image rotation method and apparatus for producing image mosaics
US6222583B1 (en) 1997-03-27 2001-04-24 Nippon Telegraph And Telephone Corporation Device and system for labeling sight images
US6236886B1 (en) 1996-12-11 2001-05-22 Technology Commercialization International Method for producing a tomographic image of the body and electric impedance tomograph
US6256057B1 (en) 1996-11-05 2001-07-03 Lockheed Martin Corporation Electro-optical reconnaissance system with forward motion compensation
US20020041328A1 (en) 2000-03-29 2002-04-11 Astrovision International, Inc. Direct broadcast imaging satellite system apparatus and method for providing real-time, continuous monitoring of earth from geostationary earth orbit and related services
US20020041717A1 (en) 2000-08-30 2002-04-11 Ricoh Company, Ltd. Image processing method and apparatus and computer-readable storage medium using improved distortion correction
US6421610B1 (en) 2000-09-15 2002-07-16 Ernest A. Carroll Method of preparing and disseminating digitized geospatial data
US6434280B1 (en) 1997-11-10 2002-08-13 Gentech Corporation System and method for generating super-resolution-enhanced mosaic images
US20020114536A1 (en) 1998-09-25 2002-08-22 Yalin Xiong Aligning rectilinear images in 3D through projective registration and calibration
US20030014224A1 (en) 2001-07-06 2003-01-16 Yanlin Guo Method and apparatus for automatically generating a site model
US20030043824A1 (en) 2001-08-31 2003-03-06 Remboski Donald J. Vehicle active network and device
US20030088362A1 (en) 2000-08-16 2003-05-08 Imagelinks, Inc. 3-dimensional interactive image modeling system
US6597818B2 (en) 1997-05-09 2003-07-22 Sarnoff Corporation Method and apparatus for performing geo-spatial registration of imagery
US20030164962A1 (en) 2002-03-01 2003-09-04 Nims Jerry C. Multiple angle display produced from remote optical sensing devices
US6639596B1 (en) 1999-09-20 2003-10-28 Microsoft Corporation Stereo reconstruction from multiperspective panoramas
JP2003317089A (en) 2002-04-24 2003-11-07 Dainippon Printing Co Ltd Method and system for image correction
US20030214585A1 (en) 2002-01-09 2003-11-20 Bakewell Charles Adams Mobile enforcement platform with aimable violation identification and documentation system for multiple traffic violation types across all lanes in moving traffic, generating composite display images and data to support citation generation, homeland security, and monitoring
US6711475B2 (en) 2000-03-16 2004-03-23 The Johns Hopkins University Light detection and ranging (LIDAR) mapping system
US6731329B1 (en) 1999-05-14 2004-05-04 Zsp Geodaetische Systeme Gmbh Method and an arrangement for determining the spatial coordinates of at least one object point
CA2505566A1 (en) 2002-11-08 2004-05-27 Stephen Schultz Oblique geolocation and measurement system
US6747686B1 (en) 2001-10-05 2004-06-08 Recon/Optical, Inc. High aspect stereoscopic mode camera and method
US20040167709A1 (en) 2002-09-20 2004-08-26 M7 Visual Intelligence, Lp Vehicle based data collection and processing system
US6810383B1 (en) 2000-01-21 2004-10-26 Xactware, Inc. Automated task management and evaluation
US6826539B2 (en) 1999-12-31 2004-11-30 Xactware, Inc. Virtual structure data repository and directory
US6829584B2 (en) 1999-12-31 2004-12-07 Xactware, Inc. Virtual home data repository and directory
US6834128B1 (en) 2000-06-16 2004-12-21 Hewlett-Packard Development Company, L.P. Image mosaicing system and method adapted to mass-market hand-held digital cameras
US6876763B2 (en) 2000-02-03 2005-04-05 Alst Technical Excellence Center Image resolution improvement using a color mosaic sensor
US20050073241A1 (en) 1999-06-21 2005-04-07 Semiconductor Energy Laboratory Co., Ltd. EL display device, driving method thereof, and electronic equipment provided with the display device
US20050088251A1 (en) 2003-10-23 2005-04-28 Nihon Dempa Kogyo Co., Ltd. Crystal oscillator
US20050169521A1 (en) 2004-01-31 2005-08-04 Yacov Hel-Or Processing of mosaic digital images
WO2005088251A1 (en) 2004-02-27 2005-09-22 Intergraph Software Technologies Company Forming a single image from overlapping images
US20060028550A1 (en) 2004-08-06 2006-02-09 Palmer Robert G Jr Surveillance system and method
US7009638B2 (en) 2001-05-04 2006-03-07 Vexcel Imaging Gmbh Self-calibrating, digital, large format camera with single or multiple detector arrays and single or multiple optical systems
US7018050B2 (en) 2003-09-08 2006-03-28 Hewlett-Packard Development Company, L.P. System and method for correcting luminance non-uniformity of obliquely projected images
US20060092043A1 (en) 2004-11-03 2006-05-04 Lagassey Paul J Advanced automobile accident detection, data recordation and reporting system
US7046841B1 (en) 2003-08-29 2006-05-16 Aerotec, Llc Method and system for direct classification from three dimensional digital imaging
US7046401B2 (en) 2001-06-01 2006-05-16 Hewlett-Packard Development Company, L.P. Camera-based document scanning system using multiple-pass mosaicking
US7061650B2 (en) 1999-05-25 2006-06-13 Silverbrook Research Pty Ltd Method and apparatus for bayer mosaic image conversion
US7065260B2 (en) 2000-10-27 2006-06-20 Microsoft Corporation Rebinning methods and arrangements for use in compressing image-based rendering (IBR) data
EP1696204A2 (en) 2002-11-08 2006-08-30 Pictometry International Corp. Method and apparatus for capturing, geolocating and measuring oblique images
US20060238383A1 (en) 2005-04-21 2006-10-26 Microsoft Corporation Virtual earth rooftop overlay and bounding
US7133551B2 (en) 2002-02-07 2006-11-07 National Central University Semi-automatic reconstruction method of 3-D building models using building outline segments
US20060250515A1 (en) 1998-09-16 2006-11-09 Olympus Optical Co., Ltd. Image pickup apparatus
US7142984B2 (en) 2005-02-08 2006-11-28 Harris Corporation Method and apparatus for enhancing a digital elevation model (DEM) for topographical modeling
US20070024612A1 (en) 2005-07-27 2007-02-01 Balfour Technologies Llc System for viewing a collection of oblique imagery in a three or four dimensional virtual scene
US7184072B1 (en) 2000-06-15 2007-02-27 Power View Company, L.L.C. Airborne inventory and inspection system and apparatus
US20070046448A1 (en) 2002-09-20 2007-03-01 M7 Visual Intelligence Vehicle based data collection and processing system and imaging sensor system and methods thereof
US7233691B2 (en) 1999-12-29 2007-06-19 Geospan Corporation Any aspect passive volumetric image processing method
US20070237420A1 (en) 2006-04-10 2007-10-11 Microsoft Corporation Oblique image stitching
WO2008028040A2 (en) 2006-08-30 2008-03-06 Pictometry International Corp. Mosaic oblique images and methods of making and using same
US20080120031A1 (en) 2006-11-16 2008-05-22 Daniel Rosenfeld Tracking method
US20080158256A1 (en) 2006-06-26 2008-07-03 Lockheed Martin Corporation Method and system for providing a perspective view image by intelligent fusion of a plurality of sensor data
US20080262789A1 (en) 2007-04-17 2008-10-23 Chris Pershing Aerial roof estimation system and method
US20090177458A1 (en) 2007-06-19 2009-07-09 Ch2M Hill, Inc. Systems and methods for solar mapping, determining a usable area for solar energy production and/or providing solar information
US20090208095A1 (en) 2008-02-15 2009-08-20 Microsoft Corporation Site modeling using image data fusion
US20090304227A1 (en) 2008-02-01 2009-12-10 Daniel Ian Kennedy Methods and Systems for Provisioning Energy Systems
US7728833B2 (en) 2004-08-18 2010-06-01 Sarnoff Corporation Method for generating a three-dimensional model of a roof structure
US20100208981A1 (en) 2009-02-13 2010-08-19 Harris Corporation Method for visualization of point cloud data based on scene content
US7832267B2 (en) 2007-04-25 2010-11-16 Ecometriks, Llc Method for determining temporal solar irradiance values
US20100296693A1 (en) 2009-05-22 2010-11-25 Thornberry Dale R System and process for roof measurement using aerial imagery
US7844499B2 (en) 2005-12-23 2010-11-30 Sharp Electronics Corporation Integrated solar agent business model
US20110033110A1 (en) 2008-04-23 2011-02-10 Pasco Corporation Building roof outline recognizing device, building roof outline recognizing method, and building roof outline recognizing program
US8078396B2 (en) 2004-08-31 2011-12-13 Meadow William D Methods for and apparatus for generating a continuum of three dimensional image data
US20130173632A1 (en) 2009-06-25 2013-07-04 University Of Tennessee Research Foundation Method and apparatus for predicting object properties and events using similarity-based information retrieval and modeling
US20130246204A1 (en) 2012-03-19 2013-09-19 Chris T. Thornberry Method and System for Quick Square Roof Reporting
US8553942B2 (en) * 2011-10-21 2013-10-08 Navteq B.V. Reimaging based on depthmap information
US20130300740A1 (en) * 2010-09-13 2013-11-14 Alt Software (Us) Llc System and Method for Displaying Data Having Spatial Coordinates
US8705843B2 (en) 2006-10-11 2014-04-22 Gta Geoinformatik Gmbh Method for texturizing virtual three-dimensional objects
US20140200861A1 (en) 2013-01-11 2014-07-17 CyberCity 3D, Inc. Computer-implemented system and method for roof modeling and asset management
US20140270492A1 (en) * 2013-03-15 2014-09-18 State Farm Mutual Automobile Insurance Company Automatic building assessment
US20150006117A1 (en) * 2013-07-01 2015-01-01 Here Global B.V. Learning Synthetic Models for Roof Style Classification Using Point Clouds
US9070018B1 (en) 2008-10-31 2015-06-30 Eagle View Technologies, Inc. Automated roof identification systems and methods
US9875509B1 (en) * 2014-10-09 2018-01-23 State Farm Mutual Automobile Insurance Company Method and system for determining the condition of insured properties in a neighborhood

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7639842B2 (en) * 2002-05-03 2009-12-29 Imagetree Corp. Remote sensing and probabilistic sampling based forest inventory method
US8494285B2 (en) * 2010-12-09 2013-07-23 The Hong Kong University Of Science And Technology Joint semantic segmentation of images and scan data
WO2012169294A1 (en) * 2011-06-09 2012-12-13 国立大学法人京都大学 Dtm estimation method, dtm estimation program, dtm estimation device, and method for creating 3-dimensional building model, as well as region extraction method, region extraction program, and region extraction device
US10663294B2 (en) * 2012-02-03 2020-05-26 Eagle View Technologies, Inc. Systems and methods for estimation of building wall area and producing a wall estimation report
US9933257B2 (en) * 2012-02-03 2018-04-03 Eagle View Technologies, Inc. Systems and methods for estimation of building wall area
US10515414B2 (en) * 2012-02-03 2019-12-24 Eagle View Technologies, Inc. Systems and methods for performing a risk management assessment of a property
EP3028464B1 (en) * 2013-08-02 2019-05-01 Xactware Solutions Inc. System and method for detecting features in aerial images using disparity mapping and segmentation techniques
US9384398B2 (en) * 2014-06-11 2016-07-05 Here Global B.V. Method and apparatus for roof type classification and reconstruction based on two dimensional aerial images
US9842282B2 (en) * 2015-05-22 2017-12-12 Here Global B.V. Method and apparatus for classifying objects and clutter removal of some three-dimensional images of the objects in a presentation
US10373011B2 (en) * 2015-08-26 2019-08-06 Onswitch Llc Automated accurate viable solar area determination
US10325370B1 (en) * 2016-05-31 2019-06-18 University Of New Brunswick Method and system of coregistration of remote sensing images

Patent Citations (194)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US2273876A (en) 1940-02-12 1942-02-24 Frederick W Lutz Apparatus for indicating tilt of cameras
US3153784A (en) 1959-12-24 1964-10-20 Us Industries Inc Photo radar ground contour mapping system
US5345086A (en) 1962-11-28 1994-09-06 Eaton Corporation Automatic map compilation system
US3621326A (en) 1968-09-30 1971-11-16 Itek Corp Transformation system
US3594556A (en) 1969-01-08 1971-07-20 Us Navy Optical sight with electronic image stabilization
US3661061A (en) 1969-05-05 1972-05-09 Atomic Energy Commission Picture position finder
US3614410A (en) 1969-06-12 1971-10-19 Knight V Bailey Image rectifier
US3716669A (en) 1971-05-14 1973-02-13 Japan Eng Dev Co Mapping rectifier for generating polarstereographic maps from satellite scan signals
US3725563A (en) 1971-12-23 1973-04-03 Singer Co Method of perspective transformation in scanned raster visual display
US3864513A (en) 1972-09-11 1975-02-04 Grumman Aerospace Corp Computerized polarimetric terrain mapping system
US4015080A (en) 1973-04-30 1977-03-29 Elliott Brothers (London) Limited Display devices
US3866602A (en) 1973-05-29 1975-02-18 Olympus Optical Co Endoscope camera with orientation indicator
US3877799A (en) 1974-02-06 1975-04-15 United Kingdom Government Method of recording the first frame in a time index system
US4044879A (en) 1975-03-07 1977-08-30 Siemens Aktiengesellschaft Arrangement for recording characters using mosaic recording mechanisms
US4707698A (en) 1976-03-04 1987-11-17 Constant James N Coordinate measurement and radar device using image scanner
US4240108A (en) 1977-10-03 1980-12-16 Grumman Aerospace Corporation Vehicle controlled raster display system
US4184711A (en) 1977-10-14 1980-01-22 Yasuo Wakimoto Folding canvas chair
US4281354A (en) 1978-05-19 1981-07-28 Raffaele Conte Apparatus for magnetic recording of casual events relating to movable means
US4396942A (en) 1979-04-19 1983-08-02 Jackson Gates Video surveys
US4360876A (en) 1979-07-06 1982-11-23 Thomson-Csf Cartographic indicator system
US4344683A (en) 1979-09-29 1982-08-17 Agfa-Gevaert Aktiengesellschaft Quality control method and apparatus for photographic pictures
US4689748A (en) 1979-10-09 1987-08-25 Messerschmitt-Bolkow-Blohm Gesellschaft Mit Beschrankter Haftung Device for aircraft and spacecraft for producing a digital terrain representation
US4387056A (en) 1981-04-16 1983-06-07 E. I. Du Pont De Nemours And Company Process for separating zero-valent nickel species from divalent nickel species
US4382678A (en) 1981-06-29 1983-05-10 The United States Of America As Represented By The Secretary Of The Army Measuring of feature for photo interpretation
US4463380A (en) 1981-09-25 1984-07-31 Vought Corporation Image processing system
US4495500A (en) 1982-01-26 1985-01-22 Sri International Topographic data gathering method
US4490742A (en) 1982-04-23 1984-12-25 Vcs, Incorporated Encoding apparatus for a closed circuit television system
US4586138A (en) 1982-07-29 1986-04-29 The United States Of America As Represented By The United States Department Of Energy Route profile analysis system and method
US4491399A (en) 1982-09-27 1985-01-01 Coherent Communications, Inc. Method and apparatus for recording a digital signal on motion picture film
US4527055A (en) 1982-11-15 1985-07-02 Honeywell Inc. Apparatus for selectively viewing either of two scenes of interest
US4543603A (en) 1982-11-30 1985-09-24 Societe Nationale Industrielle Et Aerospatiale Reconnaissance system comprising an air-borne vehicle rotating about its longitudinal axis
US4489322A (en) 1983-01-27 1984-12-18 The United States Of America As Represented By The Secretary Of The Air Force Radar calibration using direct measurement equipment and oblique photometry
US4635136A (en) 1984-02-06 1987-01-06 Rochester Institute Of Technology Method and apparatus for storing a massive inventory of labeled images
US4814711A (en) 1984-04-05 1989-03-21 Deseret Research, Inc. Survey system and method for real time collection and processing of geophysicals data using signals from a global positioning satellite network
US4686474A (en) 1984-04-05 1987-08-11 Deseret Research, Inc. Survey system for collection and real time processing of geophysical data
US4673988A (en) 1985-04-22 1987-06-16 E.I. Du Pont De Nemours And Company Electronic mosaic imaging process
US4653136A (en) 1985-06-21 1987-03-31 Denison James W Wiper for rear view mirror
US4758850A (en) 1985-08-01 1988-07-19 British Aerospace Public Limited Company Identification of ground targets in airborne surveillance radar returns
US4953227A (en) 1986-01-31 1990-08-28 Canon Kabushiki Kaisha Image mosaic-processing method and apparatus
US4653316A (en) 1986-03-14 1987-03-31 Kabushiki Kaisha Komatsu Seisakusho Apparatus mounted on vehicles for detecting road surface conditions
US4688092A (en) 1986-05-06 1987-08-18 Ford Aerospace & Communications Corporation Satellite camera image navigation
US4956872A (en) 1986-10-31 1990-09-11 Canon Kabushiki Kaisha Image processing apparatus capable of random mosaic and/or oil-painting-like processing
US4805033A (en) 1987-02-18 1989-02-14 Olympus Optical Co., Ltd. Method of forming oblique dot pattern
US4814896A (en) 1987-03-06 1989-03-21 Heitzman Edward F Real time video data acquisition systems
US5164825A (en) 1987-03-30 1992-11-17 Canon Kabushiki Kaisha Image processing method and apparatus for mosaic or similar processing therefor
US4807024A (en) 1987-06-08 1989-02-21 The University Of South Carolina Three-dimensional display methods and apparatus
US4899296A (en) 1987-11-13 1990-02-06 Khattak Anwar S Pavement distress survey system
US4843463A (en) 1988-05-23 1989-06-27 Michetti Joseph A Land vehicle mounted audio-visual trip recorder
US5034812A (en) 1988-11-14 1991-07-23 Smiths Industries Public Limited Company Image processing utilizing an object data store to determine information about a viewed object
US4906198A (en) 1988-12-12 1990-03-06 International Business Machines Corporation Circuit board assembly and contact pin for use therein
US5486948A (en) 1989-03-24 1996-01-23 Canon Hanbai Kabushiki Kaisha Stereo image forming apparatus having a light deflection member in each optical path
US5617224A (en) 1989-05-08 1997-04-01 Canon Kabushiki Kaisha Image processing apparatus having mosaic processing feature that decreases image resolution without changing image size or the number of pixels
US5121222A (en) 1989-06-14 1992-06-09 Toshiaki Endoh Method and apparatus for producing binary picture with detection and retention of plural binary picture blocks having a thin line pattern including an oblique line
US5166789A (en) 1989-08-25 1992-11-24 Space Island Products & Services, Inc. Geographical surveying using cameras in combination with flight computers to obtain images with overlaid geographical coordinates
US5296884A (en) 1990-02-23 1994-03-22 Minolta Camera Kabushiki Kaisha Camera having a data recording function
US5086314A (en) 1990-05-21 1992-02-04 Nikon Corporation Exposure control apparatus for camera
US5335072A (en) 1990-05-30 1994-08-02 Minolta Camera Kabushiki Kaisha Photographic system capable of storing information on photographed image data
US5210586A (en) 1990-06-27 1993-05-11 Siemens Aktiengesellschaft Arrangement for recognizing obstacles for pilots of low-flying aircraft
US5191174A (en) 1990-08-01 1993-03-02 International Business Machines Corporation High density circuit board and method of making same
US5200793A (en) 1990-10-24 1993-04-06 Kaman Aerospace Corporation Range finding array camera
US5155597A (en) 1990-11-28 1992-10-13 Recon/Optical, Inc. Electro-optical imaging array with motion compensation
US5267042A (en) 1991-01-11 1993-11-30 Pioneer Electronic Corporation Image pickup device for automatically recording the location where an image is recorded
US5265173A (en) 1991-03-20 1993-11-23 Hughes Aircraft Company Rectilinear object image matcher
US5369443A (en) 1991-04-12 1994-11-29 Abekas Video Systems, Inc. Digital video effects generator
US5353055A (en) 1991-04-16 1994-10-04 Nec Corporation Image pickup system with an image pickup device for control
US5555018A (en) 1991-04-25 1996-09-10 Von Braun; Heiko S. Large-scale mapping of parameters of multi-dimensional structures in natural environments
US5231435A (en) 1991-07-12 1993-07-27 Blakely Bruce W Aerial camera mounting apparatus
US5138444A (en) 1991-09-05 1992-08-11 Nec Corporation Image pickup system capable of producing correct image signals of an object zone
US5677515A (en) 1991-10-18 1997-10-14 Trw Inc. Shielded multilayer printed wiring board, high frequency, high isolation
US5402170A (en) 1991-12-11 1995-03-28 Eastman Kodak Company Hand-manipulated electronic camera tethered to a personal computer
US5247356A (en) 1992-02-14 1993-09-21 Ciampa John A Method and apparatus for mapping and measuring land
US5270756A (en) 1992-02-18 1993-12-14 Hughes Training, Inc. Method and apparatus for generating high resolution vidicon camera images
US5251037A (en) 1992-02-18 1993-10-05 Hughes Training, Inc. Method and apparatus for generating high resolution CCD camera images
US5506644A (en) 1992-08-18 1996-04-09 Olympus Optical Co., Ltd. Camera
US5481479A (en) 1992-12-10 1996-01-02 Loral Fairchild Corp. Nonlinear scanning to optimize sector scan electro-optic reconnaissance system performance
US5342999A (en) 1992-12-21 1994-08-30 Motorola, Inc. Apparatus for adapting semiconductor die pads and method therefor
US5414462A (en) 1993-02-11 1995-05-09 Veatch; John W. Method and apparatus for generating a comprehensive survey map
US5508736A (en) 1993-05-14 1996-04-16 Cooper; Roger D. Video signal processing apparatus for producing a composite signal for simultaneous display of data and video information
US5467271A (en) 1993-12-17 1995-11-14 Trw, Inc. Mapping and analysis system for precision farming applications
US5633946A (en) 1994-05-19 1997-05-27 Geospan Corporation Method and apparatus for collecting and processing visual and spatial position information from a moving platform
US5899945A (en) 1995-04-17 1999-05-04 Space Systems/Loral, Inc. Attitude control and navigation system for high resolution imaging
US5604534A (en) 1995-05-24 1997-02-18 Omni Solutions International, Ltd. Direct digital airborne panoramic camera system and method
US5668593A (en) 1995-06-07 1997-09-16 Recon/Optical, Inc. Method and camera system for step frame reconnaissance with motion compensation
US5963664A (en) 1995-06-22 1999-10-05 Sarnoff Corporation Method and system for image combination using a parallax-based technique
US5835133A (en) 1996-01-23 1998-11-10 Silicon Graphics, Inc. Optical system for single camera stereo video
US5894323A (en) 1996-03-22 1999-04-13 Tasc, Inc. Airborne imaging system using global positioning system (GPS) and inertial measurement unit (IMU) data
US5844602A (en) 1996-05-07 1998-12-01 Recon/Optical, Inc. Electro-optical imaging array and camera system with pitch rate image motion compensation which can be used in an airplane in a dive bomb maneuver
US6088055A (en) 1996-05-07 2000-07-11 Recon /Optical, Inc. Electro-optical imaging array and camera system with pitch rate image motion compensation
US5798786A (en) 1996-05-07 1998-08-25 Recon/Optical, Inc. Electro-optical imaging detector array for a moving vehicle which includes two axis image motion compensation and transfers pixels in row directions and column directions
US5841574A (en) 1996-06-28 1998-11-24 Recon/Optical, Inc. Multi-special decentered catadioptric optical system
US6373522B2 (en) 1996-11-05 2002-04-16 Bae Systems Information And Electronic Systems Integration Inc. Electro-optical reconnaissance system with forward motion compensation
US6108032A (en) 1996-11-05 2000-08-22 Lockheed Martin Fairchild Systems System and method for image motion compensation of a CCD image sensor
US6256057B1 (en) 1996-11-05 2001-07-03 Lockheed Martin Corporation Electro-optical reconnaissance system with forward motion compensation
US6236886B1 (en) 1996-12-11 2001-05-22 Technology Commercialization International Method for producing a tomographic image of the body and electric impedance tomograph
US6222583B1 (en) 1997-03-27 2001-04-24 Nippon Telegraph And Telephone Corporation Device and system for labeling sight images
US6597818B2 (en) 1997-05-09 2003-07-22 Sarnoff Corporation Method and apparatus for performing geo-spatial registration of imagery
US6097854A (en) 1997-08-01 2000-08-01 Microsoft Corporation Image mosaic construction system and apparatus with patch-based alignment, global block adjustment and pair-wise motion-based local warping
US6157747A (en) 1997-08-01 2000-12-05 Microsoft Corporation 3-dimensional image rotation method and apparatus for producing image mosaics
WO1999018732A1 (en) 1997-10-06 1999-04-15 Ciampa John A Digital-image mapping
US5852753A (en) 1997-11-10 1998-12-22 Lo; Allen Kwok Wah Dual-lens camera with shutters for taking dual or single images
US6434280B1 (en) 1997-11-10 2002-08-13 Gentech Corporation System and method for generating super-resolution-enhanced mosaic images
US6037945A (en) 1997-12-16 2000-03-14 Xactware, Inc. Graphical method for modeling and estimating construction costs
US6816819B1 (en) 1997-12-16 2004-11-09 Xactware, Inc. Graphical method and system for modeling and estimating construction parameters
US6094215A (en) 1998-01-06 2000-07-25 Intel Corporation Method of determining relative camera orientation position to create 3-D visual images
US6130705A (en) 1998-07-10 2000-10-10 Recon/Optical, Inc. Autonomous electro-optical framing camera system with constant ground resolution, unmanned airborne vehicle therefor, and methods of use
US20060250515A1 (en) 1998-09-16 2006-11-09 Olympus Optical Co., Ltd. Image pickup apparatus
US20020114536A1 (en) 1998-09-25 2002-08-22 Yalin Xiong Aligning rectilinear images in 3D through projective registration and calibration
EP1010966A1 (en) 1998-12-15 2000-06-21 Aerowest GmbH Method for generating a three dimensional object description
US6167300A (en) 1999-03-08 2000-12-26 Tci Incorporated Electric mammograph
DE60017384T2 (en) 1999-03-08 2006-03-02 Tci Inc., Albuquerque ELECTRIC MAMMOGRAPH
EP1180967A1 (en) 1999-03-08 2002-02-27 TCI Incorporated Electric mammograph
CA2402234A1 (en) 1999-03-08 2000-09-14 Tci Incorporated Electric mammograph
WO2000053090A1 (en) 1999-03-08 2000-09-14 Tci Incorporated Electric mammograph
US6731329B1 (en) 1999-05-14 2004-05-04 Zsp Geodaetische Systeme Gmbh Method and an arrangement for determining the spatial coordinates of at least one object point
US7061650B2 (en) 1999-05-25 2006-06-13 Silverbrook Research Pty Ltd Method and apparatus for bayer mosaic image conversion
US7123382B2 (en) 1999-05-25 2006-10-17 Silverbrook Research Pty Ltd Method for bayer mosaic image conversion
US20050073241A1 (en) 1999-06-21 2005-04-07 Semiconductor Energy Laboratory Co., Ltd. EL display device, driving method thereof, and electronic equipment provided with the display device
US6639596B1 (en) 1999-09-20 2003-10-28 Microsoft Corporation Stereo reconstruction from multiperspective panoramas
US7233691B2 (en) 1999-12-29 2007-06-19 Geospan Corporation Any aspect passive volumetric image processing method
US6829584B2 (en) 1999-12-31 2004-12-07 Xactware, Inc. Virtual home data repository and directory
US6826539B2 (en) 1999-12-31 2004-11-30 Xactware, Inc. Virtual structure data repository and directory
US6810383B1 (en) 2000-01-21 2004-10-26 Xactware, Inc. Automated task management and evaluation
US6876763B2 (en) 2000-02-03 2005-04-05 Alst Technical Excellence Center Image resolution improvement using a color mosaic sensor
US6711475B2 (en) 2000-03-16 2004-03-23 The Johns Hopkins University Light detection and ranging (LIDAR) mapping system
US20020041328A1 (en) 2000-03-29 2002-04-11 Astrovision International, Inc. Direct broadcast imaging satellite system apparatus and method for providing real-time, continuous monitoring of earth from geostationary earth orbit and related services
US7184072B1 (en) 2000-06-15 2007-02-27 Power View Company, L.L.C. Airborne inventory and inspection system and apparatus
US6834128B1 (en) 2000-06-16 2004-12-21 Hewlett-Packard Development Company, L.P. Image mosaicing system and method adapted to mass-market hand-held digital cameras
US20030088362A1 (en) 2000-08-16 2003-05-08 Imagelinks, Inc. 3-dimensional interactive image modeling system
US20020041717A1 (en) 2000-08-30 2002-04-11 Ricoh Company, Ltd. Image processing method and apparatus and computer-readable storage medium using improved distortion correction
US6421610B1 (en) 2000-09-15 2002-07-16 Ernest A. Carroll Method of preparing and disseminating digitized geospatial data
US7065260B2 (en) 2000-10-27 2006-06-20 Microsoft Corporation Rebinning methods and arrangements for use in compressing image-based rendering (IBR) data
US7009638B2 (en) 2001-05-04 2006-03-07 Vexcel Imaging Gmbh Self-calibrating, digital, large format camera with single or multiple detector arrays and single or multiple optical systems
US7046401B2 (en) 2001-06-01 2006-05-16 Hewlett-Packard Development Company, L.P. Camera-based document scanning system using multiple-pass mosaicking
US7509241B2 (en) 2001-07-06 2009-03-24 Sarnoff Corporation Method and apparatus for automatically generating a site model
US20030014224A1 (en) 2001-07-06 2003-01-16 Yanlin Guo Method and apparatus for automatically generating a site model
US20030043824A1 (en) 2001-08-31 2003-03-06 Remboski Donald J. Vehicle active network and device
US6747686B1 (en) 2001-10-05 2004-06-08 Recon/Optical, Inc. High aspect stereoscopic mode camera and method
US20030214585A1 (en) 2002-01-09 2003-11-20 Bakewell Charles Adams Mobile enforcement platform with aimable violation identification and documentation system for multiple traffic violation types across all lanes in moving traffic, generating composite display images and data to support citation generation, homeland security, and monitoring
US7262790B2 (en) 2002-01-09 2007-08-28 Charles Adams Bakewell Mobile enforcement platform with aimable violation identification and documentation system for multiple traffic violation types across all lanes in moving traffic, generating composite display images and data to support citation generation, homeland security, and monitoring
US7133551B2 (en) 2002-02-07 2006-11-07 National Central University Semi-automatic reconstruction method of 3-D building models using building outline segments
US20030164962A1 (en) 2002-03-01 2003-09-04 Nims Jerry C. Multiple angle display produced from remote optical sensing devices
JP2003317089A (en) 2002-04-24 2003-11-07 Dainippon Printing Co Ltd Method and system for image correction
US20070046448A1 (en) 2002-09-20 2007-03-01 M7 Visual Intelligence Vehicle based data collection and processing system and imaging sensor system and methods thereof
US20040167709A1 (en) 2002-09-20 2004-08-26 M7 Visual Intelligence, Lp Vehicle based data collection and processing system
US7127348B2 (en) 2002-09-20 2006-10-24 M7 Visual Intelligence, Lp Vehicle based data collection and processing system
EP1418402B1 (en) 2002-11-08 2006-06-21 Pictometry International Corp. Method and apparatus for capturing, geolocating and measuring oblique images
MXPA05004987A (en) 2002-11-08 2006-02-17 Pictometry Int Corp Oblique geolocation and measurement system.
CN1735897A (en) 2002-11-08 2006-02-15 皮克托米特里国际公司 Oblique geolocation and measurement system
WO2004044692A2 (en) 2002-11-08 2004-05-27 Pictometry International Corp. Oblique geolocation and measurement system
ES2266704T3 (en) 2002-11-08 2007-03-01 Pictometry International Corp. PROCEDURE AND APPARATUS TO CAPTURE, GEOLOCALIZE AND MEASURE OBLIQUE IMAGES.
ATE331204T1 (en) 2002-11-08 2006-07-15 Pictometry Int Corp METHOD AND DEVICE FOR CAPTURE, GEOLOCALIZATION AND MEASURING OBLIQUELY TAKEN IMAGES
EP1696204A2 (en) 2002-11-08 2006-08-30 Pictometry International Corp. Method and apparatus for capturing, geolocating and measuring oblique images
BR0316110A (en) 2002-11-08 2005-09-27 Pictometry Int Corp Systems for capturing images and geolocation data corresponding to them and for viewing, geolocating and measuring based on oblique captured images and methods for taking measurements within an obliquely viewed image and for capturing oblique images of an area of interest.
DK1418402T3 (en) 2002-11-08 2006-10-23 Pictometry Int Corp Method and apparatus for recording, geolocating and measuring oblique images
CA2505566A1 (en) 2002-11-08 2004-05-27 Stephen Schultz Oblique geolocation and measurement system
DE60306301T2 (en) 2002-11-08 2006-11-16 Pictometry International Corp. Method and device for detecting, geolocating and measuring obliquely recorded images
US20040105090A1 (en) 2002-11-08 2004-06-03 Schultz Stephen L. Method and apparatus for capturing, geolocating and measuring oblique images
US7046841B1 (en) 2003-08-29 2006-05-16 Aerotec, Llc Method and system for direct classification from three dimensional digital imaging
US7018050B2 (en) 2003-09-08 2006-03-28 Hewlett-Packard Development Company, L.P. System and method for correcting luminance non-uniformity of obliquely projected images
US20050088251A1 (en) 2003-10-23 2005-04-28 Nihon Dempa Kogyo Co., Ltd. Crystal oscillator
US20050169521A1 (en) 2004-01-31 2005-08-04 Yacov Hel-Or Processing of mosaic digital images
WO2005088251A1 (en) 2004-02-27 2005-09-22 Intergraph Software Technologies Company Forming a single image from overlapping images
US20060028550A1 (en) 2004-08-06 2006-02-09 Palmer Robert G Jr Surveillance system and method
US7728833B2 (en) 2004-08-18 2010-06-01 Sarnoff Corporation Method for generating a three-dimensional model of a roof structure
US8078396B2 (en) 2004-08-31 2011-12-13 Meadow William D Methods for and apparatus for generating a continuum of three dimensional image data
US20060092043A1 (en) 2004-11-03 2006-05-04 Lagassey Paul J Advanced automobile accident detection, data recordation and reporting system
US7348895B2 (en) 2004-11-03 2008-03-25 Lagassey Paul J Advanced automobile accident detection, data recordation and reporting system
US7142984B2 (en) 2005-02-08 2006-11-28 Harris Corporation Method and apparatus for enhancing a digital elevation model (DEM) for topographical modeling
US20060238383A1 (en) 2005-04-21 2006-10-26 Microsoft Corporation Virtual earth rooftop overlay and bounding
US20070024612A1 (en) 2005-07-27 2007-02-01 Balfour Technologies Llc System for viewing a collection of oblique imagery in a three or four dimensional virtual scene
US7844499B2 (en) 2005-12-23 2010-11-30 Sharp Electronics Corporation Integrated solar agent business model
US20070237420A1 (en) 2006-04-10 2007-10-11 Microsoft Corporation Oblique image stitching
US20080158256A1 (en) 2006-06-26 2008-07-03 Lockheed Martin Corporation Method and system for providing a perspective view image by intelligent fusion of a plurality of sensor data
WO2008028040A2 (en) 2006-08-30 2008-03-06 Pictometry International Corp. Mosaic oblique images and methods of making and using same
US20080123994A1 (en) 2006-08-30 2008-05-29 Stephen Schultz Mosaic Oblique Images and Methods of Making and Using Same
US8705843B2 (en) 2006-10-11 2014-04-22 Gta Geoinformatik Gmbh Method for texturizing virtual three-dimensional objects
US20080120031A1 (en) 2006-11-16 2008-05-22 Daniel Rosenfeld Tracking method
US20080262789A1 (en) 2007-04-17 2008-10-23 Chris Pershing Aerial roof estimation system and method
US7832267B2 (en) 2007-04-25 2010-11-16 Ecometriks, Llc Method for determining temporal solar irradiance values
US20090177458A1 (en) 2007-06-19 2009-07-09 Ch2M Hill, Inc. Systems and methods for solar mapping, determining a usable area for solar energy production and/or providing solar information
US20090304227A1 (en) 2008-02-01 2009-12-10 Daniel Ian Kennedy Methods and Systems for Provisioning Energy Systems
US20090208095A1 (en) 2008-02-15 2009-08-20 Microsoft Corporation Site modeling using image data fusion
US20110033110A1 (en) 2008-04-23 2011-02-10 Pasco Corporation Building roof outline recognizing device, building roof outline recognizing method, and building roof outline recognizing program
US9070018B1 (en) 2008-10-31 2015-06-30 Eagle View Technologies, Inc. Automated roof identification systems and methods
US20100208981A1 (en) 2009-02-13 2010-08-19 Harris Corporation Method for visualization of point cloud data based on scene content
US20100296693A1 (en) 2009-05-22 2010-11-25 Thornberry Dale R System and process for roof measurement using aerial imagery
US20130173632A1 (en) 2009-06-25 2013-07-04 University Of Tennessee Research Foundation Method and apparatus for predicting object properties and events using similarity-based information retrieval and modeling
US20130300740A1 (en) * 2010-09-13 2013-11-14 Alt Software (Us) Llc System and Method for Displaying Data Having Spatial Coordinates
US8553942B2 (en) * 2011-10-21 2013-10-08 Navteq B.V. Reimaging based on depthmap information
US20130246204A1 (en) 2012-03-19 2013-09-19 Chris T. Thornberry Method and System for Quick Square Roof Reporting
US20140200861A1 (en) 2013-01-11 2014-07-17 CyberCity 3D, Inc. Computer-implemented system and method for roof modeling and asset management
US20140270492A1 (en) * 2013-03-15 2014-09-18 State Farm Mutual Automobile Insurance Company Automatic building assessment
US20150006117A1 (en) * 2013-07-01 2015-01-01 Here Global B.V. Learning Synthetic Models for Roof Style Classification Using Point Clouds
US9875509B1 (en) * 2014-10-09 2018-01-23 State Farm Mutual Automobile Insurance Company Method and system for determining the condition of insured properties in a neighborhood

Non-Patent Citations (100)

* Cited by examiner, † Cited by third party
Title
"Airvideo Analysis", MicroImages, Inc., Lincoln, NE, 1 page, Dec. 1992.
"Image Measurement and Aerial Photography", Magazine for all branches of Photogrammetry and its fringe areas, Organ of the German Photogrammetry Association, Berlin-Wilmersdorf, No. 1, 1958.
"Mobile Mapping Systems Lesson 4", Lesson 4 Sure 382 Geographic Information Systems II, pp. 1-29, Jul. 2, 2006.
"POS AV" Applanix, Product Outline, airborne@applanix.com, 3 pages, Mar. 28, 2007.
"POS AV" georeferenced by Applanix aided inertial technology, http://www.applanix.com/products/posav_index.php.
"POSTrack V5 Specifications" 2005.
"Protecting Natural Resources with Remote Sensing", Proceeding of the Third Forest Service Remote Sensing Applications Conference—Apr. 9-13, 1990.
"Remote Sensing for Resource Inventory Planning and Monitoring", Proceeding of the Second Forest Service Remote Sensing Applications Conference—Slidell, Louisiana and NSTL, Mississippi, Apr. 11-15, 1988.
"RGB Spectrum Videographics Report, vol. 4, No. 1, McDonnell Douglas Integrates RGB Spectrum Systems in Helicopter Simulators", pp. 1-6, 1995.
"Standards for Digital Orthophotos", National Mapping Program Technical Instructions, US Department of the Interior, Dec. 1996.
"The First Scan Converter with Digital Video Output", Introducing . . . The RGB/Videolink 1700D-1, RGB Spectrum, 2 pages, 1995.
Ackermann, Prospects of Kinematic GPS Aerial Triangulation, ITC Journal, 1992.
AeroDach®Online Roof Evaluation Standard Delivery Format and 3D Data File: Document Version 01.00.2002 with publication in 2002, 13 pages.
Aerowest Pricelist of Geodata as of Oct. 21, 2005 and translations to English 3 pages.
Anonymous, "Live automatic coordinates for aerial images," Advanced Imaging, 12(6):51, Jun. 1997.
Anonymous, "Pictometry and US Geological Survey announce—Cooperative Research and Development Agreement," Press Release published Oct. 20, 1999.
Applanix Corp, Robust, Precise Position and Orientation Solutions, POS/AV & POS/DG Installation & Operation Manual, Redefining the way you survey, May 19, 1999, Ontario, Canada.
Applanix Corp, Robust, Precise Position and Orientation Solutions, POS/AV V4 Ethernet & Disk Logging ICD, Redefining the way you survey, Revision 3, Apr. 18, 2001, Ontario, Canada.
Applicad Online Product Bulletin archive from Jan. 7, 2003, 4 pages.
Applicad Reports dated Nov. 25, 1999-Mar. 9, 2005, 50 pages.
Applicad Sorcerer Guide, Version 3, Sep. 8, 1999, 142 pages.
Artes, F., & Hutton, J., "GPS and Inertial Navigation Delivering", GEOconnexion International Magazine, p. 52-53, Sep. 2005.
Bignone et al, Automatic Extraction of Generic House Roofs from High Resolution Aerial Imagery, Communication Technology Laboratory, Swiss Federal Institute of Technology ETH, CH-8092 Zurich, Switzerland, 12 pages, 1996.
Ciampa, J. A., Oversee, Presented at Reconstruction After Urban earthquakes, Buffalo, NY, 1989.
Ciampa, John A., "Pictometry Digital Video Mapping", SPIE, vol. 2598, pp. 140-148, 1995.
Dillow, "Grin, or bare it, for aerial shot," Orange County Register (California), Feb. 25, 2001.
Dunford et al., Remote Sensing for Rural Development Planning in Africa, The Journal for the International Institute for Aerial Survey and Earth Sciences, 2:99-108, 1983.
ERDAS Field Guide, Version 7.4, A Manual for a commercial image processing system, 1990.
Gagnon, P.A., Agnard, J. P., Nolette, C., & Boulianne, M., "A Micro-Computer based General Photogrammetric System", Photogrammetric Engineering and Remote Sensing, vol. 56, No. 5, pp. 623-625, 1990.
Garrett, "Pictometry: Aerial photography on steroids," Law Enforcement Technology 29(7):114-116, Jul. 2002.
Geospan 2007 Job proposal.
Graham, Horita TRG-50 SMPTE Time-Code Reader, Generator, Window Inserter, 1990.
Graham, Lee A., "Airborne Video for Near-Real-Time Vegetation Mapping", Journal of Forestry, 8:28-32, 1993.
Greening et al., Commercial Applications of GPS-Assisted Photogrammetry, Presented at GIS/LIS Annual Conference and Exposition, Phoenix, AZ, Oct. 1994.
Heipke, et al, "Test Goals and Test Set Up for the OEEPE Test—Integrated Sensor Orientation", 1999.
Hess, L.L, et al., "Geocoded Digital Videography for Validation of Land Cover Mapping in the Amazon Basin", International Journal of Remote Sensing, vol. 23, No. 7, pp. 1527-1555, 2002.
Hiatt, "Sensor Integration Aids Mapping at Ground Zero", Photogrammetric Engineering and Remote Sensing, Sep. 2002, p. 877-878.
Hinthorne, J., et al., "Image Processing in the Grass GIS", Geoscience and Remote Sensing Symposium, 4:2227-2229, 1991.
Imhof, Ralph K., "Mapping from Oblique Photographs", Manual of Photogrammetry, Chapter 18, 1966.
International Search Report and Written Opinion regarding PCT App. No. PCT/US2017/017196 dated Apr. 25, 2017, 25 pages.
Jensen, John R., Introductory Digital Image Processing: A Remote Sensing Perspective, Prentice-Hall, 1986; 399 pages.
Konecny, G., "Analytical Aerial Triangulation with Convergent Photography", Department of Surveying Engineering, University of New Brunswick, pp. 37-57, 1966.
Konecny, G., "Interior Orientation and Convergent Photography", Photogrammetric Engineering, pp. 625-634, 1965.
Konecny, G., "Issues of Digital Mapping", Leibniz University Hannover, Germany, GIS Ostrava 2008, Ostrava, Jan. 27-30, 2008, pp. 1-8.
Konecny, G., "Mechanische Radialtriangulation mit Konvergentaufnahmen", Bildmessung und Luftbildwesen, 1958, Nr. 1.
Kumar, et al., "Registration of Video to Georeferenced Imagery", Sarnoff Corporation, CN5300, Princeton, NJ, 1998.
Lapine, Lewis A., "Practical Photogrammetric Control by Kinematic GPS", GPS World, 1(3):44-49, 1990.
Lapine, Lewis A., Airborne Kinematic GPS Positioning for Photogrammetry—The Determination of the Camera Exposure Station, Silver Spring, MD, 11 pages, at least as early as 2000.
Linden et al., Airborne Video Automated Processing, US Forest Service Internal report, Fort Collins, CO, 1993.
McConnel, Proceedings Aerial Pest Detection and Monitoring Workshop—1994, USDA Forest Service Forest Pest Management, Northern Region, Intermountain Region, Forest Insects and Diseases, Pacific Northwest Region.
Miller, "Digital software gives small Arlington the Big Picture," Government Computer News, State & Local, 7(12), Dec. 2001.
Mostafa, "Camera/IMU Boresight Calibration: New Advances and Performance Analysis", Proceedings of the ASPRS Annual Meeting, Washington, D.C., Apr. 21-26, 2002.
Mostafa, "ISAT Direct Exterior Orientation QA/QC Strategy Using POS Data", Proceedings of OEEPE Workshop: Integrated Sensor Orientation, Hanover, Germany, Sep. 17-18, 2001.
Mostafa, "Precision Aircraft GPS Positioning Using CORS", Photogrammetric Engineering and Remote Sensing, Nov. 2002, p. 1125-1126.
Mostafa, et al., "Airborne DGPS Without Dedicated Base Stations for Mapping Applications", Proceedings of ION-GPS 2001, Salt Lake City, Utah, USA, Sep. 11-14.
Mostafa, et al., "Airborne Direct Georeferencing of Frame Imagery: An Error Budget", The 3rd International Symposium on Mobile Mapping Technology, Cairo, Egypt, Jan. 3-5, 2001.
Mostafa, et al., "Digital image georeferencing from a multiple camera system by GPS/INS," ISPRS Journal of Photogrammetry & Remote Sensing, 56(1): 1-12, Jun. 2001.
Mostafa, et al., "Direct Positioning and Orientation Systems How do they Work? What is the Attainable Accuracy?", Proceeding, American Society of Photogrammetry and Remote Sensing Annual Meeting, St. Louis, MO, Apr. 24-27, 2001.
Mostafa, et al., "Ground Accuracy from Directly Georeferenced Imagery", Published in GIM International vol. 14 N. 12 Dec. 2000.
Mostafa, et al., System Performance Analysis of INS/DGPS Integrated System for Mobile Mapping System (MMS), Department of Geomatics Engineering, University of Calgary, Commission VI, WG VI/4, Mar. 2004.
Mostafa, M.R. and Hutton, J., "Airborne Kinematic Positioning and Attitude Determination Without Base Stations", Proceedings, International Symposium on Kinematic Systems in Geodesy, Geomatics and Navigation (KIS 2001) Banff, Alberta, Canada, Jun. 4-8, 2001.
Myhre et al., "Airborne Video Technology", Forest Pest Management/Methods Application Group, Fort Collins, CO, pp. 1-6, at least as early as Jul. 30, 2006.
Myhre et al., "Airborne Videography—A Potential Tool for Resource Managers"—Proceedings: Resource Technology 90, 2nd International Symposium on Advanced Technology in Natural Resource Management, 5 pages, 1990.
Myhre et al., "An Airborne Video System Developed Within Forest Pest Management—Status and Activities", 10 pages, 1992.
Myhre et al., Aerial Photography for Forest Pest Management, Proceedings of Second Forest Service Remote Sensing Applications Conference, Slidell, Louisiana, 153-162, 1988.
Myhre, "ASPRS/ACSM/RT 92" Technical papers, Washington, D.C., vol. 5 Resource Technology 92, Aug. 3-8, 1992.
Myhre, Dick, "Airborne Video System Users Guide", USDA Forest Service, Forest Pest Management Applications Group, published by Management Assistance Corporation of America, 6 pages, 1992.
Noronha et al., "Detection and Modeling of Buildings from Multiple Aerial Images," Institute for Robotics and Intelligent Systems, University of Southern California, Nov. 27, 2001, 32 pages.
Norton-Griffiths et al., "Aerial Point Sampling for Land Use Surveys", Journal of Biogeography, 15:149-156, 1988.
Norton-Griffiths et al., "Sample surveys from light aircraft combining visual observations and very large scale color photography", University of Arizona Remote Sensing Newsletter 82-2:1-4, 1982.
Novak, Rectification of Digital Imagery, Photogrammetric Engineering and Remote Sensing, 339-344, 1992.
POS AV "Digital Frame Camera Applications", 3001 Inc., Brochure, 2007.
POS AV "Digital Scanner Applications", Earthdata Brochure, Mar. 2007.
POS AV "Film Camera Applications" AeroMap Brochure, Mar. 2007.
POS AV "LIDAR Applications" MD Atlantic Brochure, Mar. 2007.
POS AV "OEM System Specifications", 2005.
POS AV "Synthetic Aperture Radar Applications", Overview, Orbisat Brochure, Mar. 2007.
POSTrack, "Factsheet", Applanix, Ontario, Canada, www.applanix.com, Mar. 2007.
Rattigan, "Towns get new view from above," The Boston Globe, Sep. 5, 2002.
Reed, "Firm gets latitude to map O.C. in 3D," Orange County Register (California), Sep. 27, 2000.
Reyes, "Orange County freezes ambitious aerial photography project," Los Angeles Times, Oct. 16, 2000.
RGB "Computer Wall", RGB Spectrum, 4 pages, 1995.
Sampath et al. Segmentation and Reconstruction of Polyhedral Building Roofs from Aerial Lidar Point Clouds. IEEE Transactions on Geoscience and Remote Sensing. vol. 48, Issue 3, pp. 1554-1567, Nov. 3, 2009. Retrieved from the internet: https://www.researchgate.net/profile/Ajit_Sampath/publication/272818639_IEEE_Roof_Reconstruction/links/54ef7bc70cf2495330e27871.pdf.
Slaymaker et al., Mapping Deciduous Forests in Southern New England using Aerial Videography and Hyperclustered Multi-Temporal Landsat TM Imagery, Department of Forestry and Wildlife Management, University of Massachusetts, 1996.
Slaymaker, Dana M., "Point Sampling Surveys with GPS-logged Aerial Videography", Gap Bulletin number 5, University of Idaho, http://www.gap.uidaho.edu/Bulletins/5/PSSwGPS.html, 1996.
Slaymaker, et al., "A System for Real-time Generation of Geo-referenced Terrain Models", 4232A-08, SPIE Enabling Technologies for Law Enforcement Boston, MA, ftp://vis-ftp.cs.umass.edu/Papers/schultz/spie2000.pdf, 2000.
Slaymaker, et al., "Calculating Forest Biomass With Small Format Aerial Photography, Videography and a Profiling Laser", In Proceedings of the 17th Biennial Workshop on Color Photography and Videography in Resource Assessment, Reno, NV, 1999.
Slaymaker, et al., "Cost-effective Determination of Biomass from Aerial Images", Lecture Notes in Computer Science, 1737:67-76, http://portal.acm.org/citation.cfm?id=648004.743267&coll=GUIDE&dl=1999.
Slaymaker, et al., "Madagascar Protected Areas Mapped with GPS-logged Aerial Video and 35mm Air Photos", Earth Observation magazine, vol. 9, No. 1, http://www.eomonline.com/Common/Archives/2000jan/00jan_tableofcontents.html, pp. 1-4, 2000.
Slaymaker, et al., "Integrating Small Format Aerial Photography, Videography, and a Laser Profiler for Environmental Monitoring", In ISPRS WG III/1 Workshop on Integrated Sensor Calibration and Orientation, Portland, Maine, 1999.
Star et al., "Geographic Information Systems an Introduction", Prentice-Hall, 1990.
Tao, "Mobile Mapping Technology for Road Network Data Acquisition", Journal of Geospatial Engineering, vol. 2, No. 2, pp. 1-13, 2000.
Tomasi et al., "Shape and Motion from Image Streams: a Factorization Method"—Full Report on the Orthographic Case, pp. 9795-9802, 1992.
Warren, Fire Mapping with the Fire Mousetrap, Aviation and Fire Management, Advanced Electronics System Development Group, USDA Forest Service, 1986.
Weaver, "County gets an eyeful," The Post-Standard (Syracuse, NY), May 18, 2002.
Welch, R., "Desktop Mapping with Personal Computers", Photogrammetric Engineering and Remote Sensing, 1651-1662, 1989.
Westervelt, James, "Introduction to Grass 4", pp. 1-25, 1991.
www.archive.org Web site showing archive of German AeroDach Web Site http://www.aerodach.de from Jun. 13, 2004 (retrieved Sep. 20, 2012) and translations to English 4 pages.
Xactimate Claims Estimating Software archive from Feb. 12, 2010, 8 pages.
Zhu, Zhigang and Hanson, Allen R., "Mosaic-Based 3D Scene Representation and Rendering", IEEE International Conference on Image Processing (ICIP 2005), vol. 1, 2005.

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11568639B2 (en) 2015-08-31 2023-01-31 Cape Analytics, Inc. Systems and methods for analyzing remote sensing imagery
US11151378B2 (en) 2015-08-31 2021-10-19 Cape Analytics, Inc. Systems and methods for analyzing remote sensing imagery
US20170177748A1 (en) * 2015-12-16 2017-06-22 Wal-Mart Stores, Inc. Residential Upgrade Design Tool
US10521694B2 (en) * 2016-09-09 2019-12-31 The Chinese University Of Hong Kong 3D building extraction apparatus, method and system
US11030491B2 (en) 2016-09-23 2021-06-08 Aon Benfield Inc. Platform, systems, and methods for identifying property characteristics and property feature conditions through imagery analysis
US11551040B2 (en) 2016-09-23 2023-01-10 Aon Benfield Inc. Platform, systems, and methods for identifying characteristics and conditions of property features through imagery analysis
US10650285B1 (en) 2016-09-23 2020-05-12 Aon Benfield Inc. Platform, systems, and methods for identifying property characteristics and property feature conditions through aerial imagery analysis
US11195058B2 (en) 2016-09-23 2021-12-07 Aon Benfield Inc. Platform, systems, and methods for identifying property characteristics and property feature conditions through aerial imagery analysis
US11687768B2 (en) 2016-09-23 2023-06-27 Aon Benfield, Inc. Platform, systems, and methods for identifying characteristics and conditions of property features through imagery analysis
US10529029B2 (en) 2016-09-23 2020-01-07 Aon Benfield Inc. Platform, systems, and methods for identifying property characteristics and property feature maintenance through aerial imagery analysis
US11347976B2 (en) 2016-09-23 2022-05-31 Aon Benfield Inc. Platform, systems, and methods for identifying characteristics and conditions of property features through imagery analysis
US11853889B2 (en) 2016-09-23 2023-12-26 Aon Benfield Inc. Platform, systems, and methods for identifying characteristics and conditions of property features through imagery analysis
US10783648B2 (en) * 2018-03-05 2020-09-22 Hanwha Techwin Co., Ltd. Apparatus and method for processing image
US11308714B1 (en) * 2018-08-23 2022-04-19 Athenium Llc Artificial intelligence system for identifying and assessing attributes of a property shown in aerial imagery
WO2020061518A1 (en) 2018-09-21 2020-03-26 Eagle View Technologies, Inc. Method and system for determining solar access of a structure
US11551413B2 (en) 2018-09-21 2023-01-10 Eagle View Technologies, Inc. Method and system for determining solar access of a structure
US11312379B2 (en) * 2019-02-15 2022-04-26 Rockwell Collins, Inc. Occupancy map synchronization in multi-vehicle networks
US20210325182A1 (en) * 2019-03-27 2021-10-21 Chengdu Rainpoo Technology Co., Ltd. Aerial survey method and apparatus capable of eliminating redundant aerial photos
US11927442B2 (en) * 2019-03-27 2024-03-12 Chengdu Rainpoo Technology Co., Ltd. Aerial survey method and apparatus capable of eliminating redundant aerial photos
US11232150B2 (en) 2020-04-10 2022-01-25 Cape Analytics, Inc. System and method for geocoding
US11640667B2 (en) 2020-06-02 2023-05-02 Cape Analytics, Inc. Method for property feature segmentation
US11222426B2 (en) 2020-06-02 2022-01-11 Cape Analytics, Inc. Method for property feature segmentation
US11367265B2 (en) 2020-10-15 2022-06-21 Cape Analytics, Inc. Method and system for automated debris detection
US11875413B2 (en) 2021-07-06 2024-01-16 Cape Analytics, Inc. System and method for property condition analysis
US20230059652A1 (en) * 2021-08-19 2023-02-23 Forest Carbon Works, PBC Systems and methods for forest surveying
US11861843B2 (en) 2022-01-19 2024-01-02 Cape Analytics, Inc. System and method for object analysis

Also Published As

Publication number Publication date
EP3403050A4 (en) 2019-08-21
US20190377966A1 (en) 2019-12-12
US20210089805A1 (en) 2021-03-25
US10796189B2 (en) 2020-10-06
CA3014353A1 (en) 2017-08-24
US20170236024A1 (en) 2017-08-17
AU2017221222B2 (en) 2022-04-21
WO2017142788A1 (en) 2017-08-24
AU2017221222A1 (en) 2018-08-23
AU2022206780A1 (en) 2022-08-18
US11417081B2 (en) 2022-08-16
EP3403050A1 (en) 2018-11-21

Similar Documents

Publication Publication Date Title
US11417081B2 (en) Automated system and methodology for feature extraction
US11686849B2 (en) Augmented three dimensional point collection of vertical structures
US11416644B2 (en) Supervised automatic roof modeling
KR101933216B1 (en) River topography information generation method using drone and geospatial information
US9958269B2 (en) Positioning method for a surveying instrument and said surveying instrument
Stone et al. Alternatives to LiDAR-derived canopy height models for softwood plantations: a review and example using photogrammetry
JP2021179839A (en) Classification system of features, classification method and program thereof
Mweresa et al. Estimation of tree distribution and canopy heights in ifakara, tanzania using unmanned aerial system (UAS) stereo imagery
Li et al. Registration of aerial imagery and lidar data in desert areas using the centroids of bushes as control information.
Hese et al. UAV based multi seasonal deciduous tree species analysis in The Hainich National Park using multi temporal and point cloud curvature features
CN110617800A (en) Emergency remote sensing monitoring method, system and storage medium based on civil aircraft
TWI597405B (en) System and method for monitoring slope with tree displacement
Holderman Advancing Low-cost Mobile Remote Sensing Technologies for Forest Resource Management
Osborn et al. PHOTOGRAMMETRY FOR FOREST INVENTORY

Legal Events

Date Code Title Description
AS Assignment

Owner name: HPS INVESTMENT PARTNERS, LLC, NEW YORK

Free format text: SECOND LIEN PATENT SECURITY AGREEMENT;ASSIGNOR:PICTOMETRY INTERNATIONAL CORP.;REEL/FRAME:046823/0755

Effective date: 20180814

AS Assignment

Owner name: MORGAN STANLEY SENIOR FUNDING, INC., AS COLLATERAL AGENT, NEW YORK

Free format text: FIRST LIEN PATENT SECURITY AGREEMENT;ASSIGNOR:PICTOMETRY INTERNATIONAL CORP.;REEL/FRAME:046919/0065

Effective date: 20180814

AS Assignment

Owner name: PICTOMETRY INTERNATIONAL CORP., NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:WANG, YANDONG;GIUFFRIDA, FRANK;REEL/FRAME:047192/0872

Effective date: 20170329

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4