WO2022082007A1 - Method and system for automated debris detection - Google Patents


Info

Publication number
WO2022082007A1
Authority
WO
WIPO (PCT)
Prior art keywords
debris
score
region image
property
parcel
Prior art date
Application number
PCT/US2021/055227
Other languages
French (fr)
Inventor
Giacomo VIANELLO
Robert Davis
John K. CLARK
Jonathan M. FISCHER
Original Assignee
Cape Analytics, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cape Analytics, Inc. filed Critical Cape Analytics, Inc.
Publication of WO2022082007A1 publication Critical patent/WO2022082007A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/176 Urban or other man-made structures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 Computer-aided design [CAD]
    • G06F30/10 Geometric CAD
    • G06F30/13 Architectural design, e.g. computer-aided architectural design [CAAD] related to design of buildings, bridges, landscapes, production plants or roads
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/273 Removing elements interfering with the pattern to be recognised
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/188 Vegetation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Definitions

  • This invention relates generally to the computer vision field, and more specifically to a new and useful system and method for automated debris detection.
  • FIGURE 1 depicts a schematic representation of a variant of the method.
  • FIGURE 2 depicts a schematic representation of a variant of the system.
  • FIGURE 3 depicts a variant of the method.
  • FIGURE 4 depicts a variant of the method.
  • FIGURE 5 depicts an illustrative representation of an example of the method.
  • FIGURE 6 depicts an embodiment of the method.
  • FIGURE 7 is an illustrative example of training a debris model.
  • the method 10 for automatic debris detection can include: determining a region image S100; optionally determining a parcel representation for the region image S200; generating a debris representation using the region image S300; optionally generating a debris score based on the debris representation S400; optionally monitoring the debris score over time S500; and optionally returning debris information S600.
  • the method can additionally or alternatively include any other suitable elements.
  • the system 20 for automatic debris detection can include one or more computing systems 100, one or more datastores 200, and/or any other components.
  • the system and method function to identify and analyze (e.g., score) a property’s debris (e.g., yard debris, parking structure debris, and/or any other debris on a property).
  • the property can be: residential properties (e.g., homes), commercial properties (e.g., industrial centers, forest land, quarries, etc.), plots, parcels, and/or any other suitable property class.
  • the resultant debris information (e.g., debris score, debris representation, debris parameters, etc.) can be used as an input in one or more property models, such as an automated valuation model, a property loss model, and/or any other suitable model.
  • the debris information can be used for property inspection (e.g., automatically sent to an inspector, such as when the value of the debris score is above or below a predetermined threshold). However, the debris information can be otherwise used.
  • the system and method can include receiving a request including: a property identifier, such as coordinates, an address, an image (e.g., region image), a specified built structure, and/or other specified property feature from a user device and/or API; optionally determining a region image associated with the property identifier (e.g., that depicts the parcel associated with the property identifier and optionally other parcels or parts of parcels); determining a debris representation (e.g., potential debris heatmap, such as values between 0-1, debris bounding boxes, etc.) using a debris representation module; and optionally determining a debris score using the debris representation.
  • determining the debris representation can include: determining a foreground heatmap from the region image; optionally masking the heatmap with parcel boundaries associated with the property identifier (e.g., when the region image is not restricted to the parcel); and removing heatmap segments associated with known classes (e.g., roofs, vegetation, shadows, pools, driveways, etc.; determined by other classifiers, etc.).
  • the debris scores (e.g., debris square area, debris percentage of parcel, etc.) can then be determined from the resultant debris representation.
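As an illustrative sketch (not the patent's actual implementation), debris scores such as square area and percentage of parcel could be computed from pixel-aligned binary masks; the function name and the 0.3 m default ground resolution are assumptions:

```python
import numpy as np

def debris_scores(debris_mask, parcel_mask, ground_resolution_m=0.3):
    """Compute illustrative debris scores from pixel-aligned boolean masks.

    debris_mask / parcel_mask: 2D boolean arrays (True = debris / parcel).
    ground_resolution_m: edge length of one pixel in meters (assumed known).
    """
    debris_in_parcel = debris_mask & parcel_mask
    pixel_area = ground_resolution_m ** 2
    debris_area_m2 = debris_in_parcel.sum() * pixel_area      # debris square area
    parcel_pixels = parcel_mask.sum()
    # debris percentage of parcel (0 when the parcel mask is empty)
    debris_pct = debris_in_parcel.sum() / parcel_pixels if parcel_pixels else 0.0
    return {"debris_area_m2": float(debris_area_m2), "debris_pct": float(debris_pct)}
```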
  • the debris representation can be directly determined by a debris model trained to identify (e.g., classify, segment) debris within an image.
  • the debris model can additionally or alternatively be trained to generate the debris score.
  • the debris model can be trained on debris segments (e.g., identified using the first specific example, example shown in FIGURE 7), the debris scores (e.g., determined using the first specific example), and/or otherwise trained.
  • the method and system can enable automatic identification of accumulations of exterior debris at one or more specific points in time, which can improve underwriting decisions and create opportunities to proactively address condition issues before they result in insurance claims or other condition-related losses.
  • variants of the system and method avoid expensive data collection techniques, such as in-person inspections and/or buying expensive datasets. Instead, the system and method enable scalable debris analysis by using remote imagery (e.g., satellite, aerial, drone imagery, etc.).
  • variants of the system and method return up-to-date results by using the most recent imagery for every request (e.g., collected within the last day, the last week, within the last month, within the last 3-6 months, etc.). The system and method can additionally return results for imagery taken for multiple different points in time.
  • debris identification can be more accurate when performed on a wide-view image of the area of interest, instead of identifying debris solely based on the image segment associated with the parcel. This is because the parcel’s image segment can cut off portions of known, non-debris objects (e.g., portions of a tree), thereby resulting in said objects being erroneously classified as debris.
  • the inventors have discovered that, because there is a low number of properties with yard debris and because there is a wide variety of debris that can possibly occur, the yard debris dataset can be too sparse to adequately train a debris classifier, and potentially be biased (e.g., against certain socioeconomic demographics).
  • the system and method can still reliably detect yard debris by detecting and removing non-debris objects from foreground objects detected in the property image, and treating the remaining foreground objects as debris.
  • Debris can include: junk cars (e.g., vehicles surrounded by debris, on the parcel but not in the driveway, etc.), appliances, construction materials, disorganized stacks of boxes, man-made piles of indeterminate composition, and/or any other debris.
  • Non-debris can include property features: structures (e.g., a primary structure, such as the primary residence or a commercial building; roofs; a secondary structure, such as an additional dwelling unit, shed, or garage; etc.), paved surfaces, water features, furniture (e.g., patio and/or lawn chairs, hammocks, trampolines, etc.), vegetation excluding fallen trees (e.g., maintained gardens, grass, trees, etc.), and/or any other non-debris.
  • Non-debris can exclude any features that do not have a positive height (e.g., holes, pools, etc.) or are themselves surfaces (e.g., driveways, sidewalks, patios, decks, etc.), but can additionally or alternatively include said features.

4. System.
  • the method is preferably performed using the system 20, including: one or more computing systems 100, one or more datastores 200, and/or any other suitable components.
  • the computing system can include a remote computing system (e.g. one or more servers), a user device (e.g., smartphone, laptop, desktop, etc.), and/or other computing system.
  • the computing system can be used with a user interface, wherein the computing system can receive an identifier from the user device which can be used to retrieve a region image from the datastore. Additionally or alternatively, the computing system can receive a region image from the datastore in response to the datastore receiving an identifier from the user device.
  • the computing system can include a remote computing system and a user device that interfaces with the remote computing system via an API.
  • the computing system can include a remote computing system that interfaces with a third-party via an API.
  • the computing system can include one or more modules.
  • the one or more modules can include: a debris module, a parcel representation module, a set of non-debris modules, and/or any other suitable module.
  • the debris module functions to identify debris segments within an image, and can optionally analyze the debris segments (e.g., determine a debris score based on the debris segments).
  • the debris module can include (and/or be split into) a debris representation module (e.g., to determine one or more debris representations), a debris score module, and/or any other suitable debris module.
  • the parcel representation module functions to determine one or more parcel representations (e.g., parcel boundaries, parcel masks, etc.) of a property parcel (e.g., land parcel).
  • Each module can include one or more: classification models, neural networks, regression models, segmentation models, sets of equations, sets of heuristics or rules, and/or be otherwise constructed.
  • classification models can be separate from the modules.
  • the modules can receive as input the region image, optionally depth information (e.g., a 3D point cloud, a digital surface map, a digital elevation map, etc.), and/or any other suitable information of the region.
  • the modules preferably output one or more representations, but can additionally or alternatively output one or more classes (e.g., for the image, for a pixel, for an image segment, etc.), one or more scores, and/or any other suitable information.
  • the representation is preferably a heatmap, but can additionally or alternatively be a mask, one or more segments, one or more boundaries, a score, and/or any other information.
  • the modules can include other model types.
  • the modules can include machine learning models, sets of rules, heuristics, and/or any other suitable model.
  • the modules can be neural networks (e.g., DNN, CNN, RNN, etc.), decision trees, SVMs, regressions, Naive Bayes, clustering algorithms (e.g., k-nearest neighbors, k-means, etc.), and/or any other suitable machine learning model.
  • the modules can be semantic segmentation models, instance-based segmentation models, object detection models (e.g., YOLO), and/or any other segmentation model.
  • the modules can be binary classifiers (e.g., roof vs. background, ground vs. non-ground, shadow vs. non-shadow, vegetation vs. non-vegetation, etc.), and/or any other suitable classifiers.
  • the method can: use the same trained segmentation model in all contexts, selectively use the trained segmentation model based on the location context, and/or otherwise use the trained segmentation model.
  • location context can include: location information (e.g., city, neighborhood, street, etc.); zoning; developed environment class (e.g., urban, suburban, rural, exurban, etc.); average distance between buildings (e.g., determined based on the parcel data); debris exceeding a predetermined percentage threshold; predetermined object presence in the image; and/or other contextual parameters.
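The context-based selection of a trained segmentation model described above can be sketched as a simple lookup; the `environment` context key and the model names are hypothetical, not identifiers from the patent:

```python
def select_debris_model(models, location_context):
    """Pick a segmentation model variant based on location context.

    models: dict mapping a developed-environment class (e.g., "urban",
            "suburban", "rural") to a trained model, plus a "default" entry.
    location_context: dict of contextual parameters, e.g.
            {"environment": "suburban", "zoning": "R1"}.
    """
    env = location_context.get("environment")
    # Fall back to the model used in all contexts when no variant matches.
    return models.get(env, models["default"])
```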
  • the modules can be trained using supervised learning (e.g., trained using training data), unsupervised learning, semi-supervised learning, and/or otherwise trained.
  • the training data can be generated by the system and method and/or generated separately.
  • the system can be used with region imagery depicting all or a portion of a parcel, multiple parcels, and/or any other suitable imagery.
  • the system can include or be used with one or more datastores, which can store the region imagery, parcel representations, and/or any other information.
  • the datastore can be queried to retrieve the region imagery, parcel representations, and/or any other suitable information used to perform the method.
  • the query can include geographic coordinates, an address, and/or any other property identifier (e.g., used to identify a parcel and/or group of parcels).
  • the region imagery, parcel representation, non-debris representation, and/or other information can be associated with a property identifier, can be georeferenced (e.g., associated with one or more geographic coordinates), and/or associated with any other suitable information.
  • the property identifier can include: an address, a set of geographic coordinates, a landmark name, a lot number, a parcel number, a location identifier (e.g., Google Plus Codes™, Geohashes™, Place Key™, etc.), and/or any other suitable location identifier.
  • the system can include any other suitable components.
  • the method for automatic debris detection can include: determining a region image S100; optionally determining parcel representations associated with the property identifier S200; generating a debris representation using the region image S300; optionally generating a debris score based on the debris representation S400; optionally monitoring the debris information over time S500; and optionally returning debris information S600.
  • the method can additionally or alternatively include any other suitable elements.
  • the method can be performed for one or more properties (e.g., in parallel, in series, etc.).
  • properties can include: a parcel of land, a built structure (e.g., a building, a pool, accessory structures such as decks, etc.), and/or any other suitable property.
  • the method can be performed for a single property, identified in a request.
  • the method can be performed for a plurality of properties (e.g., in a batch), wherein a different instance of the method is applied to each property within the plurality (e.g., property appearing in a region image, property identified in a list, etc.).
  • the resultant debris information can be stored in association with the property identifier for the respective property.
  • the method is preferably performed by the system discussed above, but can be otherwise performed.
  • All or portions of the method can be performed when a debris information request is received for one or more properties, when a new region image is received, and/or at any other suitable time.
  • the debris information can be determined in response to the request, be pre-calculated, and/or calculated at any other suitable time.
  • the debris information can be returned (e.g., sent to the user) in response to the request.
  • Determining a region image S100 can function to determine an image for the debris model.
  • the region image can depict a property feature, debris, and/or any other elements.
  • the region image can optionally be determined based on a received property identifier (e.g., address, location, latitude and longitude coordinates, etc.), such as received from a user device or an API.
  • the region image can be received from the datastore that stores region imagery and/or retrieved from the datastore, such as using a property identifier.
  • the region image can be imagery of the parcel associated with the property identifier, imagery of multiple parcels that include the parcel associated with the property identifier, and/or any other image.
  • the region image can be remote imagery (e.g., aerial imagery, etc.), be crowdsourced for a geographic region, or other imagery.
  • Remote imagery can include imagery captured using: drones; aircraft (e.g., fixed-wing, rotary-wing, etc.); balloons; satellites; terrestrial vehicles; user devices (e.g., smartphones, augmented reality devices, glasses, etc.); and/or otherwise captured.
  • the region image can depict a geographic region larger than a predetermined area threshold (e.g., average parcel area, manually determined region, image-provider-determined region, etc.), a large geographic extent (e.g., multiple acres that can be assigned or unassigned to a parcel), encompass one or more regions, such as parcels, and/or any other suitable geographic region.
  • the region image can include top-down views of the region (e.g., nadir images, panoptic images), but can additionally or alternatively include views from other angles (e.g., oblique imagery, street view imagery) and/or other views.
  • the region image is preferably 2D, but can additionally or alternatively be 3D and/or have any other suitable dimension.
  • the region image can be associated with depth information (e.g., terrain information, property feature information, etc.), and/or other information or data.
  • the region images can be red-green-blue (RGB), hyperspectral, multispectral, black and white, IR, NIR, UV, and/or captured using any other suitable wavelength.
  • the region image is preferably orthorectified, but can be otherwise processed.
  • the region image can additionally or alternatively include any other suitable characteristics.
  • the region image can be associated with geographic data; time data (e.g., recurrent time, unique timestamp); and/or other data.
  • the region imagery is preferably pixel-aligned with geographic coordinates, but can be offset, aligned within a threshold margin of error, or otherwise aligned.
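For pixel-aligned imagery, the correspondence between geographic coordinates and pixel indices is a simple affine mapping. A minimal sketch for a north-up, axis-aligned orthorectified image follows; the function and parameter names are illustrative assumptions:

```python
def geo_to_pixel(lon, lat, origin_lon, origin_lat, lon_res, lat_res):
    """Map geographic coordinates to (row, col) pixel indices for a
    north-up, axis-aligned orthorectified region image.

    origin_lon / origin_lat: geographic coordinates of the top-left corner.
    lon_res: degrees of longitude per pixel (positive, west-to-east).
    lat_res: degrees of latitude per pixel (positive, north-to-south).
    """
    col = round((lon - origin_lon) / lon_res)
    row = round((origin_lat - lat) / lat_res)  # latitude decreases downward
    return row, col
```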
  • Examples of geographic data can include: a geolocation (e.g., of an image centroid, such as geographic coordinates); a geographic extent (e.g., area, range of geographic coordinates, etc.); municipal labels (e.g., set of addresses, a set of parcel identifiers or APNs, counties, neighborhoods, cities, etc.); and/or other geographic data.
  • the method can include receiving an address from a user device and using the address to retrieve a region image (e.g., from a third-party API, from a database of region images, etc.), wherein the region image encompasses the property identified by the address.
  • the region image can be determined in response to: determining a geographic descriptor associated with the property identifier, transmitting the descriptor to a third-party and receiving the region image associated with the geographic descriptor.
  • the geographic descriptor can include: a geographic coordinate (e.g., determined using conventional geocoding methods), a parcel identifier, a municipal identifier (e.g., determined based on the ZIP, ZIP+4, city, state, etc.), or other descriptor.
  • the region image can be determined in response to querying the datastore, wherein the query can include the address and/or the geographic descriptor.
  • S100 includes: receiving the property identifier, determining a set of geolocations (e.g., a geofence, the parcel boundaries, etc.) associated with the property identifier, retrieving images encompassing the geolocations, and optionally stitching the images together to form the region image.
  • the images can be: contemporaneously sampled images of the geolocations (e.g., sampled in the same pass), wherein the geolocations do not appear in the same image frame; images from multiple timepoints (e.g., different times of day, different days of the week, etc.), wherein different portions of the property are occluded in each image; and/or otherwise related.
  • the region image can be otherwise determined.
  • Determining a parcel representation for the region image S200 can function to determine a subregion associated with the property’s parcel and/or surrounding a property feature, such that the debris analysis is limited to the property’s debris.
  • the parcel representations can be used to segment the region image (e.g., before S300, such that the region image only represents the parcel), used to mask the debris representation, used to calculate the debris score (e.g., when the debris score is representative of the proportion of a parcel covered by debris), and/or otherwise used.
  • the parcel representation preferably represents the property parcel, but can additionally or alternatively represent a geographic region associated with a property or built structure (e.g., region surrounding the built structure, patio associated with the built structure, etc.), represent a larger or smaller geographic region, and/or represent any other suitable region.
  • the parcel representations can include polygon geometries, masks, geofences, values, and/or any other suitable information.
  • the parcel representations can be binary masks and/or any other labelled mask.
  • the parcel representations can include: a parcel boundary mask, a vegetation mask, shadow masks, a primary structure mask (e.g., primary residence and/or primary building), secondary structure masks (e.g., guest house, pool house, gazebo, shed, any other structure separate from the primary structure, and/or not separate from the primary structure, such as a garage, attached shed, etc.), parcel feature masks (e.g., pool, court, driveway, vegetation, etc.), and/or any other suitable mask.
  • the parcel representations can be determined after S300, concurrently with S300 (e.g., using the same or different image or data source), before S300, and/or determined at any other time.
  • the parcel representations can be determined using the parcel representation module (e.g., from municipal parcel data, architectural diagrams, etc.).
  • the parcel representations can be retrieved from the datastore (e.g., the parcel representations can be predetermined and stored in the datastore).
  • the parcel representation can be defined by an area whose boundary is a predetermined distance from (and/or geofence around) a predetermined location (e.g., primary structure, parcel centroid, etc.) (e.g., as shown in FIGURE 6).
  • the predetermined distance can be: manually specified, determined based on computational constraints, and/or otherwise determined.
  • the parcel boundary representation can be determined based on the parcel area.
  • a subset of the parcel can be used as the parcel boundary (e.g., within a predetermined distance of the centroid of the primary structure, such as within 10 feet, 20 feet, 30 feet, 50 feet, etc.).
  • the cropped parcel boundary can be circular, square, rectangular, irregular, and/or any other suitable shape.
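A cropped parcel boundary of circular or square shape around a predetermined location (e.g., the primary structure centroid) can be sketched in pixel space as a distance mask; the function name, parameters, and the Chebyshev-distance square variant are illustrative assumptions:

```python
import numpy as np

def centroid_crop_mask(shape, centroid_rc, radius_px, square=False):
    """Boolean mask selecting pixels within radius_px of centroid_rc.

    shape: (rows, cols) of the region image.
    centroid_rc: (row, col) of the predetermined location, e.g. the
                 primary structure centroid.
    square: if True, use a square (Chebyshev) neighborhood; otherwise
            a circular (Euclidean) one.
    """
    rows, cols = np.ogrid[:shape[0], :shape[1]]
    dr, dc = rows - centroid_rc[0], cols - centroid_rc[1]
    if square:
        return np.maximum(np.abs(dr), np.abs(dc)) <= radius_px
    return dr ** 2 + dc ** 2 <= radius_px ** 2
```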
  • the parcel representations can be otherwise determined.
  • Generating a debris representation using the region image and the parcel representations S300 can function to determine debris and/or non-debris segments in the region image.
  • the debris representation is preferably a mask, but can additionally or alternatively be a heatmap, set of segments (e.g., image segments), a semantic segmentation (e.g., with pixel blobs associated with a debris tag), a set of bounding boxes around the debris (e.g., polygons, rectangles, etc.), and/or any other representation.
  • the debris representation can be determined after determining the region image, optionally after determining the parcel representations, concurrently with determining the parcel representations, before determining the parcel representations, and/or performed at any other time.
  • the debris representation is preferably determined by the debris model, but can be additionally or alternatively determined by any other model, rule, heuristic, or otherwise determined.
  • the debris model is preferably a binary classifier (e.g., outputting the probability of whether a pixel represents debris or not), but can alternatively be a multiclass classifier (e.g., classifying each pixel with a debris type), an object detector (e.g., trained to detect yard debris), and/or other model.
  • the debris representation can include labels for non-debris objects (e.g., a separate label for roof, pool, etc.).
  • the non-debris objects can be determined by the debris model, by a separate non-debris model (e.g., a version of the debris model trained to label non-debris objects), and/or any other suitable model.
  • the non-debris segments can optionally be pre-determined and retrieved from the datastore (e.g., calculated during S200), determined in parallel with debris segmentation, and/or otherwise determined.
  • the debris module can receive an input region image and/or depth information that includes multiple parcels (e.g., determined in S100).
  • the output of the debris module can include a mask of debris segments across the multiple parcels (e.g., each pixel is assigned a 1 if it is debris and 0 otherwise).
  • the debris module outputs a heatmap with debris probabilities assigned per pixel, which is then thresholded (e.g., at a predetermined probability cutoff) to generate the mask.
  • the parcel segment can optionally be isolated from this mask using the parcel representation.
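The threshold-then-isolate step can be sketched as follows; the 0.5 cutoff is an assumed default, not a value from the patent:

```python
import numpy as np

def debris_mask_from_heatmap(heatmap, parcel_mask=None, cutoff=0.5):
    """Threshold a per-pixel debris-probability heatmap (values in 0-1)
    into a binary mask, optionally isolating the parcel segment."""
    mask = heatmap >= cutoff
    if parcel_mask is not None:
        mask = mask & parcel_mask  # keep only pixels inside the parcel
    return mask
```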
  • the debris module can receive as input a region image and/or depth information of the associated parcel area (e.g., determined by combining the region image and the parcel boundary representation determined in S200, such as by masking out all areas of the region image excluding the parcel area defined by the parcel boundaries).
  • the output of the debris module can include a binary debris mask (e.g., each pixel is assigned a 1 if it is debris and 0 otherwise) and/or any other suitable representation.
  • the debris representation can be cooperatively formed from representations of one or more debris classes.
  • one or more debris modules can identify (e.g., segment, semantically segment) one or more classes of debris (e.g., trash, fallen vegetation, engines, rusted cars, etc.).
  • the segments output by the debris modules can be aggregated to form the debris representation.
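Aggregating the per-class segments (trash, fallen vegetation, etc.) into a single debris representation can be sketched as a pixel-wise logical OR over the per-class masks:

```python
import numpy as np

def aggregate_debris_masks(class_masks):
    """Union per-class debris masks into one debris representation.

    class_masks: iterable of same-shape 2D arrays (boolean or 0/1),
                 one per debris class.
    """
    return np.logical_or.reduce([np.asarray(m, dtype=bool) for m in class_masks])
```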
  • the debris module can determine a foreground map by segmenting ground pixels from non-ground pixels (e.g., using background / foreground segmentation), wherein the non-ground pixels can represent debris.
  • the foreground map can be determined based on the region image, depth information, and/or any other suitable information.
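One way such a foreground map could be derived from depth information is by thresholding height above ground, e.g. a digital surface model minus a digital elevation model; the 0.3 m threshold is an illustrative assumption, not a value from the patent:

```python
import numpy as np

def foreground_map(dsm, dem, min_height_m=0.3):
    """Segment ground vs. non-ground pixels from depth information.

    dsm: digital surface model (terrain + objects), meters, 2D array.
    dem: digital elevation model (bare terrain), meters, 2D array.
    Pixels whose height above ground exceeds min_height_m are treated
    as foreground (potential debris).
    """
    height_above_ground = np.asarray(dsm) - np.asarray(dem)
    return height_above_ground > min_height_m
```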
  • the debris representation (e.g., output by the fourth variant) can be modified to remove non-debris objects, such as to label and/or correct debris and/or non-debris classifications in the debris representation (e.g., when a ground classifier is used as the debris model, to remove non-debris objects, etc.) and/or provide any other functionality.
  • the modified representation can be determined by subtracting parcel masks or other masks (e.g., vegetation masks, shadow masks, primary structure mask, secondary structure masks, and/or other parcel feature masks, etc.) retrieved from storage or determined by the computing system (e.g., from the same or contemporaneously-sampled images) from the debris representation, wherein S400 is performed on the resultant representation. This can be performed before or after parcel region isolation.
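The mask-subtraction step can be sketched as follows; the function name and the example mask names (vegetation, roof) are illustrative assumptions:

```python
import numpy as np

def remove_non_debris(debris_repr, non_debris_masks):
    """Subtract non-debris masks (vegetation, shadows, structures, etc.)
    from a binary debris representation, leaving only presumed debris."""
    out = np.asarray(debris_repr, dtype=bool).copy()
    for mask in non_debris_masks:
        out &= ~np.asarray(mask, dtype=bool)  # clear pixels covered by this mask
    return out
```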
  • the method can include: determining a foreground map based on the region image using a first model of the computing system (e.g., a first classification model); determining a set of non-debris representations within the region image using a set of non-debris models (e.g., secondary classification models); and identifying debris within the region image.
  • the set of non-debris representations can include masks (e.g., feature masks), maps, heatmaps, segments, and/or other representations associated with one or more non-debris classes of objects (e.g., non-debris features).
  • the non-debris representations can include: built structure maps, vegetation maps, pool maps, umbrella maps, and/or other spatial representations of non-debris.
  • the non-debris representations can be determined by: a single non-debris module (e.g., configured to identify a predetermined set of non-debris), a different non-debris module for each non-debris class (e.g., a built structure classifier, a vegetation classifier, etc.), and/or any other suitable set of modules.
  • the non-debris representation can be determined based on a region image.
  • the non-debris region image can be the region image from S100, the same region image used to determine the debris representation, a different region image from that used in S100 or in debris representation determination, a region image of the same region or depicting the same property, and/or be any other suitable region image.
  • the non-debris representation can be retrieved from a database, or otherwise determined.
  • the non-debris representation can be determined as part of S300, determined before S300, and/or otherwise determined.
  • Identifying debris in the region image can include: optionally isolating the parcel in the foreground map using the parcel representation (e.g., masking out parcels of the foreground map not represented by the parcel representation), and removing the non-debris features (e.g., property features, etc.) by subtracting the non-debris representation (e.g., non-debris feature mask(s)) from the foreground map (e.g., relabelling the pixels as background, masking out the pixels associated with the non-debris feature mask(s)), wherein the remaining foreground segments (e.g., pixels) can be representative of debris.
  • Identifying debris can optionally include removing foreground segments (e.g., debris segments) satisfying a set of conditions (e.g., smaller than a threshold area, having less than a threshold dimension, etc.). However, the debris can be otherwise identified in the region image.
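The identification steps above (isolating the parcel, subtracting non-debris masks, and dropping undersized segments) can be sketched as follows. The function name, the use of NumPy boolean masks, the SciPy connected-component labeling, and the size threshold are illustrative assumptions, not part of the disclosed system:

```python
import numpy as np
from scipy import ndimage


def identify_debris(foreground, parcel_mask, non_debris_masks, min_area_px=20):
    """Label foreground pixels inside the parcel that are not covered by any
    non-debris mask (structures, vegetation, shadows, etc.) as debris, then
    drop connected segments smaller than min_area_px."""
    debris = foreground & parcel_mask            # isolate the parcel region
    for mask in non_debris_masks:                # remove known non-debris features
        debris = debris & ~mask
    labeled, n_segments = ndimage.label(debris)  # connected-component labeling
    for seg_id in range(1, n_segments + 1):      # drop undersized segments
        if (labeled == seg_id).sum() < min_area_px:
            debris[labeled == seg_id] = False
    return debris                                # remaining foreground = debris
```

The remaining `True` pixels play the role of the debris representation fed to scoring.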
  • the depth information can be used to classify pixels as foreground (e.g., debris) or background (e.g., non-debris).
  • the depth information can be used to determine ground pixels in a region image.
  • the depth information can be used to determine vegetation and/or property features in a region image (e.g., using heuristics, algorithms, and/or rules for processing the depth information).
  • the debris representation can be determined from the depth information, wherein the debris representation can be 3D (e.g., include a 2D footprint and height).
  • the depth information can be intersected with the 2D debris representation (e.g., the depth information can be masked with the 2D debris representation, etc.), to identify the points, voxels, mesh cells, and/or other depth units associated with the 2D debris segments.
  • the depth information can be otherwise used.
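One way to use depth information as described above is to treat height above bare earth as the foreground criterion and then intersect the height map with a 2D debris mask. The 0.3 m threshold and the DSM/DEM input convention are assumptions for illustration:

```python
import numpy as np


def foreground_from_depth(dsm, dem, height_thresh_m=0.3):
    """Classify pixels as foreground (potential debris) when their height above
    the bare-earth elevation exceeds a threshold (assumed value).
    dsm: digital surface model; dem: digital elevation model, both in meters."""
    height = dsm - dem                  # normalized height above ground
    return height > height_thresh_m    # boolean foreground mask


def debris_heights(dsm, dem, debris_mask):
    """Intersect the depth information with a 2D debris mask, yielding
    per-pixel debris heights (0 outside the debris footprint)."""
    height = np.clip(dsm - dem, 0.0, None)
    return np.where(debris_mask, height, 0.0)
```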
  • certain classes of debris can be selectively excluded from the debris representation. These variants can leverage debris modules that can identify (e.g., segment, semantically segment) a given debris class, wherein the output segments can be removed from the overall debris representation. Alternatively, the results of those debris modules can be omitted from the debris representation (e.g., in variants where the debris representation is aggregated from multiple debris classes).
  • Certain debris classes can be excluded when: the property is associated with the debris class (e.g., the property is a car junkyard, the property is a used car lot, the property is an outdoor engine storage space, etc.), when a user requests that the debris class be omitted, or when any other suitable exclusion condition is met.
  • the property-debris class association can be determined: manually, from a business listing, from the parcel zoning, the property zoning (e.g., single family home, multi-family home, commercial, etc.), and/or otherwise determined.
  • the debris representation can be otherwise determined.
  • the method can optionally include determining debris parameters.
  • debris parameters can include: debris composition, debris location, debris size, debris volume, debris class, debris density, debris temperature, debris formation rate, and/or any other suitable parameter.
  • the debris parameters can be determined from the debris representation, from the depth information (e.g., masked with the debris representation), and/or otherwise determined.
  • the debris area can be calculated from the size of the debris representation.
  • the debris volume can be determined from the depth map or point cloud segment overlapping the debris representation.
  • the debris parameters can be otherwise determined.
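The area and volume calculations above can be sketched as follows, assuming a known square-area-to-pixel mapping (the 0.15 m/px ground sampling distance is a hypothetical value):

```python
import numpy as np


def debris_parameters(debris_mask, height_map=None, m_per_px=0.15):
    """Compute debris area from a binary debris mask, and debris volume when a
    per-pixel height map (e.g., derived from a depth map) is available."""
    px_area = m_per_px ** 2                       # square meters per pixel
    params = {"area_m2": float(debris_mask.sum() * px_area)}
    if height_map is not None:
        # volume: integrate height over the debris footprint
        params["volume_m3"] = float((height_map * debris_mask).sum() * px_area)
    return params
```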
  • the method can optionally include determining a debris class for the debris.
  • the debris class can be determined from the debris representation, from the depth map, and/or from any other suitable information.
  • the debris class is preferably determined by a model trained to classify the debris (e.g., debris classifier), but can be manually or otherwise determined.
  • the model can be a binary model (e.g., specific to a debris class), a multiclass model (e.g., trained to classify the debris as one or more of a set of potential debris classes), and/or any other suitable model.
  • Examples of debris classes can include: flammable/nonflammable; easily mitigated/not easily mitigated; composition (e.g., vegetation, machinery, metal, plastic, etc.); and/or any other suitable class.
  • any other suitable debris information can be determined.
  • the method can optionally include generating a debris score using the debris representation S400, which can function to determine an amount of debris on a parcel.
  • the debris score can be stored in the datastore (e.g., to enable score monitoring in S500) or not stored.
  • the debris score can optionally be presented to a user.
  • the debris score can represent an area of the parcel covered by debris, the area of the parcel covered by debris divided by the parcel size (e.g., percent of the parcel covered by debris; percent of the parcel representation intersecting the debris representation; etc.), the area surrounding a structure of a predetermined radius that is covered by debris, the volume of debris (e.g., determined from a 3D debris representation), ratio of the primary building or built structure area to the debris area, proximity of the debris to a built structure (e.g., proximity of any debris to a primary building, proximity of the debris centroid to the built structure, etc.), the typicality of the property’s amount of debris for the property’s area (e.g., normalized by the debris scores for other properties in the neighborhood), the quality of the debris (e.g., level of degradation, type of debris, etc.), the location of the debris (e.g., driveway debris may result in a higher score than backyard debris; determined by intersecting the debris representation with a driveway representation determined by a driveway module, etc.), the debris
  • the debris score can be a percentage (e.g., of the parcel, of the unbuilt square footage); a measurement (e.g., total square area, number of debris piles, etc.); distribution metric (e.g., based on proximity to the primary structure); a binary score (e.g., in or on a property feature or not, such as inside a water feature, on a paved surface, etc.); a classification (e.g., high amount of debris, medium amount of debris, low amount of debris, etc.), a comparison to a reference area score (e.g., the difference to a reference score, a percentile for the reference area, etc.), wherein the reference area can be: neighborhood, geofenced area, town, city, county, state, province, country; a proximity to a built structure and/or other property feature, and/or any other suitable value.
  • the debris score is preferably generated using the debris representation (e.g., including only the parcel region; including multiple parcel regions; etc.), the parcel representation (e.g
  • the debris score can be calculated by converting the area of the debris, represented by the modified and/or original debris representation, into a measurement (e.g., square footage, square meters, etc.). This can be done based on a known square-area to pixel mapping, or otherwise determined.
  • the debris score can be calculated by converting the area of the debris represented by the modified and/or original debris representation into a measurement and dividing the amount by the size of the parcel.
  • the debris score can be calculated by converting the area labelled as ground into the debris score.
  • non-debris objects can be removed from the debris representation before calculating the debris score.
  • parcel representations can be combined with the debris representation to more accurately calculate the debris score (e.g., based on the debris representation and one or more parcel representations).
  • a mask of a property feature (e.g., calculated in S300) can be used to determine an instance of the property feature in the region image and/or foreground map, and the debris score can be calculated based on the debris within the area of the region image defined by the mask of the property feature.
  • the property feature can be a water feature on the parcel and the debris score can be calculated for the debris in the water feature (e.g., debris in a pool, debris in a lake, debris in a fountain, debris in a pond, etc.).
  • the property feature can be a paved surface (e.g., driveway, walkway, etc.) and the debris score can be calculated based on the debris that is on the paved surface.
  • the debris representation can be fed as input into a classification model of the computing system and the debris score is output by the classification model.
  • the classification model can be trained on a set of debris representations, each associated with a debris score.
  • the classification model can be a regression model.
  • the debris score can be otherwise determined.
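One of the score definitions above (the percent of the parcel covered by debris) reduces to a ratio of mask areas. This is a sketch of that single variant, not the full scoring module, and the function name is an assumption:

```python
import numpy as np


def debris_score(debris_mask, parcel_mask):
    """Debris score as the percentage of the parcel area covered by debris,
    computed from a binary debris mask and a binary parcel mask."""
    parcel_px = parcel_mask.sum()
    if parcel_px == 0:
        return 0.0                               # empty parcel representation
    covered = (debris_mask & parcel_mask).sum()  # debris pixels inside the parcel
    return 100.0 * float(covered) / float(parcel_px)
```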
  • the method can optionally include monitoring the debris information over time S500, which can function to determine changes to the property and/or debris over time.
  • the monitored debris information can include: the debris score, the debris representation, the debris parameters, and/or other information.
  • the changes can be quantified (e.g., rate of change over time), and can be used in property analyses or otherwise used.
  • the changes can include: more debris accumulation over time, less debris accumulation over time, debris relocation over time (e.g., relocation of the same debris instance, relocation of different debris classes, relocation of a centroid of the debris, etc.), change in debris composition over time, debris residency duration (e.g., how long the debris remains on the parcel, which can be indicative of built structure construction or modification), and/or any other suitable set of changes.
  • S500 can include redetermining (e.g., recalculating) the debris information.
  • the debris information can be re-calculated periodically (e.g., every month, every 3 months, every 6 months, every 12 months, every new image for the property, etc.), or determined at any other suitable frequency.
  • the debris information can be sent to the user in response to an API query, recorded in the database as an entry associated with the property identifier, sent to a third-party system, and/or otherwise used.
  • the debris information can be monitored using one or more time series processes to determine historical change over time, to predict accumulation of debris over time, and/or otherwise used.
  • the time series processes can include: recurrent neural networks, autoregression, moving average, autoregressive moving average, autoregressive integrated moving-average, seasonal autoregressive integrated moving-average, vector autoregression, vector autoregression moving-average, exponential smoothing, and/or any other suitable process.
  • the debris information can be otherwise monitored.
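A minimal stand-in for the time-series processes listed above is a least-squares linear fit over historical debris scores, giving an accumulation rate; autoregression or the other listed models would replace this in practice. The function name and inputs are assumptions:

```python
import numpy as np


def debris_trend(timestamps_days, scores):
    """Estimate the debris accumulation rate (score units per day) from a
    history of debris scores via a least-squares linear fit."""
    t = np.asarray(timestamps_days, dtype=float)
    s = np.asarray(scores, dtype=float)
    slope, _intercept = np.polyfit(t, s, 1)  # degree-1 polynomial fit
    return float(slope)
```

A positive slope indicates accumulating debris; the slope can feed the change-quantification described above.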
  • the method can optionally include returning the debris information S600, which can function to provide debris information to an endpoint (e.g., the requesting endpoint).
  • S600 can function to return the debris information in response to a request, store the debris information in the datastore for future retrieval, and/or provide any other functionality.
  • the debris information can include: the debris score (e.g., determined in S400 and/or S500), the debris representation (e.g., determined in S300), the debris class (e.g., wherein the debris representation and/or image segment corresponding to debris is fed to a debris classification model), the debris parameters, and/or other debris information.
  • the debris information can be presented as part of a code block (e.g., data returned in response to an API request), presented in a user interface (e.g., in a user-facing application, such as on a user device), and/or otherwise returned.
  • the debris information can be automatically returned in response to an event, or returned at any other suitable time.
  • the event can include the debris score rising above or falling below a predetermined threshold, a re-calculation of the debris score (e.g., when updated imagery for the property is received by the system), receipt of a debris information request, and/or any other suitable event.
  • the debris information can be automatically returned to a third-party system (e.g., to update models and/or model outputs that are generated based on the debris score), to the datastore, to a third-party datastore, to a user, and/or to any other suitable entity. However, the debris information can be otherwise returned.
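The returned debris information might be assembled into an API response body along these lines; the field names and structure are illustrative assumptions, not the actual response schema:

```python
import json


def debris_response(property_id, debris_score, debris_parameters, debris_class):
    """Assemble debris information (e.g., from S300-S500) as a JSON response
    body for a requesting endpoint."""
    return json.dumps({
        "property_id": property_id,
        "debris_score": debris_score,
        "debris_parameters": debris_parameters,
        "debris_class": debris_class,
    })
```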
  • Different processes and/or elements discussed above can be performed and controlled by the same or different entities. In the latter variants, different subsystems can communicate via: APIs (e.g., using API requests and responses, API keys, etc.), requests, and/or other communication channels.
  • Alternative embodiments implement the above methods and/or processing modules in non-transitory computer-readable media, storing computer-readable instructions that, when executed by a processing system, cause the processing system to perform the method(s) discussed herein.
  • the instructions can be executed by computer-executable components integrated with the computer-readable medium and/or processing system.
  • the computer-readable medium may include any suitable computer readable media such as RAMs, ROMs, flash memory, EEPROMs, optical devices (CD or DVD), hard drives, floppy drives, non-transitory computer readable media, or any suitable device.
  • the computer-executable component can include a computing system and/or processing system (e.g., including one or more collocated or distributed, remote or local processors) connected to the non-transitory computer-readable medium, such as CPUs, GPUs, TPUs, microprocessors, or ASICs, but the instructions can alternatively or additionally be executed by any suitable dedicated hardware device.
  • Embodiments of the system and/or method can include every combination and permutation of the various system components and the various method processes, wherein one or more instances of the method and/or processes described herein can be performed asynchronously (e.g., sequentially), concurrently (e.g., in parallel), or in any other suitable order by and/or using one or more instances of the systems, elements, and/or entities described herein.

Abstract

In variants, the method for automatic debris detection includes: determining a region image; optionally determining a parcel representation for the region image; generating a debris representation using the region image; generating a debris score based on the debris representation; and optionally monitoring the debris score over time.

Description

METHOD AND SYSTEM FOR AUTOMATED DEBRIS DETECTION
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of U.S. Provisional Application serial number 63/092,283, filed on 15-OCT-2020, which is incorporated in its entirety by this reference.
TECHNICAL FIELD
[0002] This invention relates generally to the computer vision field, and more specifically to a new and useful system and method for automated debris detection.
BACKGROUND
[0003] Property conditions, especially the existence of debris, are difficult to assess in the insurance underwriting process. Insurance carriers often quote and bind policies, only to learn about condition issues upon an on-site physical inspection or, worse, following a claim. Walk-around inspections may reveal accumulated debris on the premises, presenting fire hazards, liability hazards, and visually undesirable objects. Insurance underwriting and other property valuation decisions are typically made with the best information at hand, but issues with property condition are not well represented in traditional data sources and typically are not discovered until the property is inspected, if at all.
[0004] Thus, there is a need for a new and useful system and method for automatic debris detection.
BRIEF DESCRIPTION OF THE FIGURES
[0005] FIGURE 1 depicts a schematic representation of a variant of the method.
[0006] FIGURE 2 depicts a schematic representation of a variant of the system.
[0007] FIGURE 3 depicts a variant of the method.
[0008] FIGURE 4 depicts a variant of the method.
[0009] FIGURE 5 depicts an illustrative representation of an example of the method.
[0010] FIGURE 6 depicts an embodiment of the method.
[0011] FIGURE 7 is an illustrative example of training a debris model.
DETAILED DESCRIPTION
[0012] The following description of the preferred embodiments of the invention is not intended to limit the invention to these preferred embodiments, but rather to enable any person skilled in the art to make and use this invention.
1. Overview.
[0013] As shown in FIGURE 1, the method 10 for automatic debris detection can include: determining a region image S100; optionally determining a parcel representation for the region image S200; generating a debris representation using the region image S300; optionally generating a debris score based on the debris representation S400; optionally monitoring the debris score over time S500; and optionally returning debris information S600. However, the method can additionally or alternatively include any other suitable elements.
[0014] As shown in FIGURE 2, the system 20 for automatic debris detection can include one or more computing systems 100, one or more datastores 200, and/or any other components.
[0015] The system and method function to identify and analyze (e.g., score) a property’s debris (e.g., yard debris, parking structure debris, and/or any other debris on a property). The property can be: a residential property (e.g., a home), a commercial property (e.g., an industrial center, forest land, quarry, etc.), a plot, a parcel, and/or any other suitable property class.
[0016] The resultant debris information (e.g., debris score, debris representation, debris parameters, etc.) can be used as an input in one or more property models, such as an automated valuation model, a property loss model, and/or any other suitable model. The debris information can be used for property inspection (e.g., automatically sent to an inspector, such as when the value of the debris score is above or below a predetermined threshold). However, the debris information can be otherwise used.
2. Examples.
[0017] In a first example, the system and method can include receiving a request including: a property identifier, such as coordinates, an address, an image (e.g., region image), a specified built structure and/or other specified property feature from a user device and/or API; optionally determining a region image associated with the property identifier (e.g., that depicts the parcel associated with the property identifier and optionally other parcels or parts of parcels); determining a debris representation (e.g., potential debris heatmap, such as values between 0-1, debris bounding boxes, etc.) using a debris representation module; and optionally determining a debris score using the debris representation.
[0018] In a first specific example, determining the debris representation can include: determining a foreground heatmap from the region image; optionally masking the heatmap with parcel boundaries associated with the property identifier (e.g., when the region image is not restricted to the parcel); and removing heatmap segments associated with known classes (e.g., roofs, vegetation, shadows, pools, driveways, etc.; determined by other classifiers, etc.). The debris scores (e.g., debris square area, debris percentage of parcel, etc.) can be determined based on the remaining heatmap segments.
[0019] In a second specific example, the debris representation can be directly determined by a debris model trained to identify (e.g., classify, segment) debris within an image. The debris model can additionally or alternatively be trained to generate the debris score. The debris model can be trained on debris segments (e.g., identified using the first specific example, example shown in FIGURE 7), the debris scores (e.g., determined using the first specific example), and/or otherwise trained.
3. Benefits.
[0020] The method and system can confer several benefits over conventional systems.
[0021] First, the method and system can enable automatic identification of accumulations of exterior debris at one or more specific points in time, which can improve underwriting decisions and create opportunities to proactively address condition issues before they result in insurance claims or other condition-related losses.
[0022] Second, variants of the system and method avoid expensive data collection techniques, such as in-person inspections and/or buying expensive datasets. Instead, the system and method enable scalable debris analysis by using remote imagery (e.g., satellite, aerial, drone imagery, etc.).
[0023] Third, variants of the system and method return up-to-date results by using the most recent imagery for every request (e.g., collected within the last day, the last week, within the last month, within the last 3-6 months, etc.). The system and method can additionally return results for imagery taken for multiple different points in time.
[0024] Fourth, the inventors have discovered that, in some variants, debris identification can be more accurate when performed on a wide-view image of the area of interest, instead of identifying debris solely based on the image segment associated with the parcel. This is because the parcel’s image segment can cut off portions of known, non-debris objects (e.g., portions of a tree), thereby resulting in said objects being erroneously classified as debris.
[0025] Fifth, the inventors have discovered that, because there is a low number of properties with yard debris and because there is a wide variety of debris that can possibly occur, the yard debris dataset can be too sparse to adequately train a debris classifier, and potentially be biased (e.g., against certain socioeconomic demographics). In these variants, the system and method can still reliably detect yard debris by detecting and removing non-debris objects from foreground objects detected in the property image, and treating the remaining foreground objects as debris.
[0026] However, the method and system can confer any other suitable benefits.
[0027] Debris can include: junk cars (e.g., vehicles surrounded by debris, on the parcel but not in the driveway, etc.), appliances, construction materials, disorganized stacks of boxes, man-made piles of indeterminate composition, and/or any other debris.
[0028] Non-debris can include property features: structures (e.g., primary structure, such as the primary residence, commercial building, etc.; roofs; etc.); secondary structures, such as an additional dwelling unit, shed, or garage; paved surfaces; water features; furniture (e.g., patio and/or lawn chairs, hammock, trampoline, etc.); vegetation excluding fallen trees (e.g., maintained gardens, grass, trees, etc.); and/or any other non-debris. Non-debris can exclude any features that do not have a positive height (e.g., holes, pools, etc.) or are themselves surfaces (e.g., driveways, sidewalks, patios, decks, etc.), but can additionally or alternatively include said features.
4. System.
[0029] The method is preferably performed using the system 20, including: one or more computing systems 100, one or more datastores 200, and/or any other suitable components.
[0030] The computing system can include a remote computing system (e.g., one or more servers), a user device (e.g., smartphone, laptop, desktop, etc.), and/or other computing system. The computing system can be used with a user interface, wherein the computing system can receive an identifier from the user device which can be used to retrieve a region image from the datastore. Additionally or alternatively, the computing system can receive a region image from the datastore in response to the datastore receiving an identifier from the user device.
[0031] In some embodiments, the computing system can include a remote computing system and a user device that interfaces with the remote computing system via an API. In some embodiments, the computing system can include a remote computing system that interfaces with a third-party via an API.
[0032] The computing system can include one or more modules. The one or more modules can include: a debris module, a parcel representation module, a set of non-debris modules, and/or any other suitable module. The debris module functions to identify debris segments within an image, and can optionally analyze the debris segments (e.g., determine a debris score based on the debris segments). The debris module can include (and/or be split into) a debris representation module (e.g., to determine one or more debris representations), a debris score module, and/or any other suitable debris module. The parcel representation module functions to determine one or more parcel representations (e.g., parcel boundaries, parcel masks, etc.) of a property parcel (e.g., land parcel).
[0033] Each module can include one or more: classification models, neural networks, regression models, segmentation models, sets of equations, sets of heuristics or rules, and/or be otherwise constructed. Alternatively, classification models can be separate from the modules. The modules can receive as input the region image, optionally depth information (e.g., a 3D point cloud, a digital surface map, a digital elevation map, etc.), and/or any other suitable information of the region. The modules preferably output one or more representations, but can additionally or alternatively output one or more classes (e.g., for the image, for a pixel, for an image segment, etc.), one or more scores, and/or any other suitable information. The representation is preferably a heatmap, but can additionally or alternatively be a mask, one or more segments, one or more boundaries, a score, and/or any other information. However, the modules can include other model types.
[0034] The modules can include machine learning models, sets of rules, heuristics, and/or any other suitable model. The modules can be neural networks (e.g., DNN, CNN, RNN, etc.), decision trees, SVMs, regressions, Naive Bayes, clustering algorithms (e.g., k-nearest neighbors, k-means, etc.), and/or any other suitable machine learning model. The modules can be semantic segmentation models, instance-based segmentation models, object detection models (e.g., YOLO), and/or any other segmentation model. The modules can be binary classifiers (e.g., roof vs. background, ground vs. non-ground, shadow vs. non-shadow, vegetation vs. non-vegetation, etc.), a multi-class classifier (e.g., multiple labels such as roof, ground, vegetation, shadow, etc.), and/or any other suitable classifier. During inference, the method can: use the same trained segmentation model in all contexts, selectively use the trained segmentation model based on the location context, and/or otherwise use the trained segmentation model. Examples of location context include: location information (e.g., city, neighborhood, street, etc.); zoning; developed environment class (e.g., urban, suburban, rural, exurban, etc.); average distance between buildings (e.g., determined based on the parcel data); debris exceeding a predetermined percentage threshold; predetermined object presence in the image; and/or other contextual parameters.
[0035] The modules can be trained using supervised learning (e.g., trained using training data), unsupervised learning, semi-supervised learning, and/or otherwise trained. The training data can be generated by the system and method and/or generated separately.
[0036] The system can be used with region imagery depicting all or a portion of a parcel, multiple parcels, and/or any other suitable imagery.
[0037] The system can include or be used with one or more datastores, which can store the region imagery, parcel representations, and/or any other information. The datastore can be queried to retrieve the region imagery, parcel representations, and/or any other suitable information used to perform the method. The query can include geographic coordinates, an address, and/or any other property identifier (e.g., used to identify a parcel and/or group of parcels).
[0038] The region imagery, parcel representation, non-debris representation, and/or other information can be associated with a property identifier, can be georeferenced (e.g., associated with one or more geographic coordinates), and/or associated with any other suitable information. The property identifier can include: an address, a set of geographic coordinates, a landmark name, a lot number, a parcel number, a location identifier (e.g., Google Plus Codes™, Geohashes™, Place Key™, etc.), and/or any other suitable location identifier.
[0039] However, the system can include any other suitable components.
5. Method.
[0040] The method for automatic debris detection can include: determining a region image S100; optionally determining parcel representations associated with the property identifier S200; generating a debris representation using the region image S300; optionally generating a debris score based on the debris representation S400; optionally monitoring the debris information over time S500; and optionally returning debris information S600. However, the method can additionally or alternatively include any other suitable elements.
[0041] The method can be performed for one or more properties (e.g., in parallel, in series, etc.). Examples of properties can include: a parcel of land, a built structure (e.g., a building, a pool, accessory structures such as decks, etc.), and/or any other suitable property. In a first example, the method can be performed for a single property, identified in a request. In a second example, the method can be performed for a plurality of properties (e.g., in a batch), wherein a different instance of the method is applied to each property within the plurality (e.g., property appearing in a region image, property identified in a list, etc.). The resultant debris information can be stored in association with the property identifier for the respective property.
[0042] The method is preferably performed by the system discussed above, but can be otherwise performed.
[0043] All or portions of the method can be performed when a debris information request is received for one or more properties, when a new region image is received, and/or at any other suitable time. The debris information can be determined in response to the request, be pre-calculated, and/or calculated at any other suitable time. The debris information can be returned (e.g., sent to the user) in response to the request.
5.1 Determining a region image S100.
[0044] Determining a region image S100 can function to determine an image for the debris model. The region image can depict a property feature, debris, and/or any other elements. The region image can optionally be determined based on a received property identifier (e.g., address, location, latitude and longitude coordinates, etc.), such as received from a user device or an API. The region image can be received from the datastore that stores region imagery and/or retrieved from the datastore, such as using a property identifier. The region image can be imagery of the parcel associated with the property identifier, imagery of multiple parcels that include the parcel associated with a property identifier, and/or any other image.
[0045] The region image can be remote imagery (e.g., aerial imagery, etc.), imagery crowdsourced for a geographic region, or other imagery. Remote imagery can include imagery captured using: drones; aircraft (e.g., fixed-wing, rotary-wing, etc.); balloons; satellites; terrestrial vehicles; user devices (e.g., smartphones, augmented reality devices, glasses, etc.); and/or otherwise captured. The region image can depict a geographic region larger than a predetermined area threshold (e.g., average parcel area, manually determined region, image-provider-determined region, etc.), depict a large geographic extent (e.g., multiple acres that can be assigned or unassigned to a parcel), encompass one or more regions, such as parcels, and/or any other suitable geographic region. The region image can include top-down views of the region (e.g., nadir images, panoptic images), but can additionally or alternatively include views from other angles (e.g., oblique imagery, street view imagery) and/or other views. The region image is preferably 2D, but can additionally or alternatively be 3D and/or have any other suitable dimension. The region image can be associated with depth information (e.g., terrain information, property feature information, etc.), and/or other information or data. The region images can be red-green-blue (RGB), hyperspectral, multispectral, black and white, IR, NIR, UV, and/or captured using any other suitable wavelength. The region image is preferably orthorectified, but can be otherwise processed. The region image can additionally or alternatively include any other suitable characteristics.
[0046] The region image can be associated with geographic data; time data (e.g., recurrent time, unique timestamp); and/or other data. The region imagery is preferably pixel-aligned with geographic coordinates, but can be offset, aligned within a threshold margin of error, or otherwise aligned. Examples of geographic data can include: a geolocation (e.g., of an image centroid, such as geographic coordinates); a geographic extent (e.g., area, range of geographic coordinates, etc.); municipal labels (e.g., set of addresses, a set of parcel identifiers or APNs, counties, neighborhoods, cities, etc.); and/or other geographic data.
[0047] In a first variant, the method can include receiving an address from a user device and using the address to retrieve a region image (e.g., from a third-party API, from a database of region images, etc.), wherein the region image encompasses the property identified by the address.
[0048] In a second variant, the region image can be determined in response to: determining a geographic descriptor associated with the property identifier, transmitting the descriptor to a third party, and receiving the region image associated with the geographic descriptor. The geographic descriptor can include: a geographic coordinate (e.g., determined using conventional geocoding methods), a parcel identifier, a municipal identifier (e.g., determined based on the ZIP, ZIP+4, city, state, etc.), or other descriptor.
[0049] In a third variant, the region image can be determined in response to querying the datastore, wherein the query can include the address and/or the geographic descriptor.
[0050] In a fourth variant, S100 includes: receiving the property identifier, determining a set of geolocations (e.g., a geofence, the parcel boundaries, etc.) associated with the property identifier, retrieving images encompassing the geolocations, and optionally stitching the images together to form the region image. The images can be: contemporaneously sampled images of the geolocations (e.g., sampled in the same pass), wherein the geolocations do not appear in the same image frame; images from multiple timepoints (e.g., different times of day, different days of the week, etc.), wherein different portions of the property are occluded in each image; and/or otherwise related.
[0051] However, the region image can be otherwise determined.
5.2 Determining a parcel representation for the region image S200.
[0052] Determining a parcel representation for the region image S200 can function to determine a subregion associated with the property’s parcel and/or surrounding a property feature, such that the debris analysis is limited to the property’s debris. The parcel representations can be used to segment the region image (e.g., before S300, such that the region image only represents the parcel), used to mask the debris representation, used to calculate the debris score (e.g., when the debris score is representative of the proportion of a parcel covered by debris), and/or otherwise used. The parcel representation preferably represents the property parcel, but can additionally or alternatively represent a geographic region associated with a property or built structure (e.g., region surrounding the built structure, patio associated with the built structure, etc.), represent a larger or smaller geographic region, and/or represent any other suitable region.
[0053] The parcel representations can include polygon geometries, masks, geofences, values, and/or any other suitable information. The parcel representations can be binary masks and/ or any other labelled mask. The parcel representations can include: a parcel boundary mask, a vegetation mask, shadow masks, a primary structure mask (e.g., primary residence and/or primary building), secondary structure masks (e.g., guest house, pool house, gazebo, shed, any other structure separate from the primary structure, and/or not separate from the primary structure, such as a garage, attached shed, etc.), parcel feature masks (e.g., pool, court, driveway, vegetation, etc.), and/or any other suitable mask. The parcel representations can be determined after S300, concurrently with S300 (e.g., using the same or different image or data source), before S300, and/or determined at any other time.
[0054] In a first variation, the parcel representations can be determined using the parcel representation module (e.g., from municipal parcel data, architectural diagrams, etc.).
[0055] In a second variation, the parcel representations can be retrieved from the datastore (e.g., the parcel representations can be predetermined and stored in the datastore).
[0056] In a third variation, the parcel representation can be defined by an area whose boundary is a predetermined distance from (and/or geofence around) a predetermined location (e.g., primary structure, parcel centroid, etc.) (e.g., as shown in FIGURE 6). The predetermined distance can be: manually specified, determined based on computational constraints, and/or otherwise determined.
[0057] In a specific example, the parcel boundary representation can be determined based on the parcel area. When the parcel area is larger than a threshold size, a subset of the parcel can be used as the parcel boundary (e.g., within a predetermined distance of the centroid of the primary structure, such as within 10 feet, 20 feet, 30 feet, 50 feet, etc.). The cropped parcel boundary can be circular, square, rectangular, irregular, and/or any other suitable shape.
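The cropping described in this example can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation: the function name, the boolean-mask representation, and the pixel-based area and radius thresholds are all assumptions for demonstration.

```python
import numpy as np

def crop_parcel_mask(parcel_mask, structure_centroid, max_area_px, radius_px):
    """If the parcel mask exceeds a size threshold, restrict it to a circular
    region around the primary-structure centroid (hypothetical sketch)."""
    if parcel_mask.sum() <= max_area_px:
        return parcel_mask  # parcel is small enough; use the full boundary
    rows, cols = np.indices(parcel_mask.shape)
    cy, cx = structure_centroid
    # keep only pixels within radius_px of the centroid (a circular crop)
    within = (rows - cy) ** 2 + (cols - cx) ** 2 <= radius_px ** 2
    return parcel_mask & within
```

A square or rectangular crop would follow the same pattern with a different `within` predicate, matching the other crop shapes mentioned above.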
[0058] However, the parcel representations can be otherwise determined.
5.3 Generating a debris representation using the region image and the parcel representations S300.
[0059] Generating a debris representation using the region image and the parcel representations S300 can function to determine debris and/or non-debris segments in the region image.
[0060] The debris representation is preferably a mask, but can additionally or alternatively be a heatmap, set of segments (e.g., image segments), a semantic segmentation (e.g., with pixel blobs associated with a debris tag), a set of bounding boxes around the debris (e.g., polygons, rectangles, etc.), and/or any other representation. The debris representation can be determined after determining the region image, optionally after determining the parcel representations, concurrently with determining the parcel representations, before determining the parcel representations, and/or performed at any other time. The debris representation is preferably determined by the debris model, but can be additionally or alternatively determined by any other model, rule, heuristic, or otherwise determined. The debris model is preferably a binary classifier (e.g., outputting the probability of whether a pixel represents debris or not), but can alternatively be a multiclass classifier (e.g., classifying each pixel with a debris type), an object detector (e.g., trained to detect yard debris), and/or other model.
[0061] Optionally, the debris representation can include labels for non-debris objects (e.g., a separate label for roof, pool, etc.). The non-debris objects can be determined by the debris model, by a separate non-debris model (e.g., a version of the debris model trained to label non-debris objects), and/or any other suitable model. The non-debris segments can optionally be pre-determined and retrieved from the datastore (e.g., calculated during S200), determined in parallel with debris segmentation, and/or otherwise determined.
[0062] In a first variant, the debris module can receive an input region image and/or depth information that includes multiple parcels (e.g., determined in S100). The output of the debris module can include a mask of debris segments across the multiple parcels (e.g., each pixel is assigned a 1 if it is debris and 0 otherwise). In a specific example, the debris module outputs a heatmap with debris probabilities assigned per pixel, which is then thresholded (e.g., at a predetermined probability cutoff) to generate the mask. The parcel segment can optionally be isolated from this mask using the parcel representation.
[0063] In a second variant, the debris module can receive as input a region image and/or depth information of the associated parcel area (e.g., determined by combining the region image and the parcel boundary representation determined in S200, such as by masking out all areas of the region image excluding the parcel area defined by the parcel boundaries). The output of the debris module can include a binary debris mask (e.g., each pixel is assigned a 1 if it is debris and 0 otherwise) and/or any other suitable representation.
[0064] In a third variant, the debris representation can be cooperatively formed from representations of one or more debris classes. In this variant, one or more debris modules can identify (e.g., segment, semantically segment) one or more classes of debris (e.g., trash, fallen vegetation, engines, rusted cars, etc.). The segments output by the debris modules can be aggregated to form the debris representation.
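The thresholding and parcel-isolation steps of the first variant can be sketched as below. The 0.5 cutoff, the function signature, and the array-based mask representation are illustrative assumptions, not values from the specification.

```python
import numpy as np

def debris_mask_from_heatmap(heatmap, threshold=0.5, parcel_mask=None):
    """Threshold a per-pixel debris-probability heatmap into a binary mask,
    then optionally isolate the parcel of interest (hypothetical sketch)."""
    mask = heatmap >= threshold           # 1 where debris probability is high
    if parcel_mask is not None:
        mask = mask & parcel_mask         # keep only pixels inside the parcel
    return mask.astype(np.uint8)
```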
[0065] In a fourth variant, the debris module can determine a foreground map by segmenting ground pixels from non-ground pixels (e.g., using background/foreground segmentation), wherein the non-ground pixels can represent debris. The foreground map can be determined based on the region image, depth information, and/or any other suitable information.
[0066] In a fifth variant, the debris representation (e.g., output by the fourth variant) can be modified to remove non-debris objects, such as to label and/or correct debris and/or non-debris classifications in the debris representation (e.g., when a ground classifier is used as the debris model, to remove non-debris objects, etc.) and/or provide any other functionality. The modified representation can be determined by subtracting parcel masks or other masks (e.g., vegetation masks, shadow masks, primary structure mask, secondary structure masks, and/or other parcel feature masks, etc.) retrieved from storage or determined by the computing system (e.g., from the same or contemporaneously-sampled images) from the debris representation, wherein S400 is performed on the resultant representation. This can be performed before or after parcel region isolation.
[0067] In this variant, the method can include: determining a foreground map based on the region image using a first model of the computing system (e.g., a first classification model); determining a set of non-debris representations within the region image using a set of non-debris models (e.g., secondary classification models); and identifying debris within the region image.
[0068] The set of non-debris representations (e.g., property features) can include masks (e.g., feature masks), maps, heatmaps, segments, and/or other representations associated with one or more non-debris classes of objects (e.g., non-debris features). For example, the non-debris representations can include: built structure maps, vegetation maps, pool maps, umbrella maps, and/or other spatial representations of non-debris. The non-debris representations can be determined by: a single non-debris module (e.g., configured to identify a predetermined set of non-debris), a different non-debris module for each non-debris class (e.g., a built structure classifier, a vegetation classifier, etc.), and/or any other suitable set of modules. The non-debris representation can be determined based on a region image. The non-debris region image can be the region image from S100, the same region image used to determine the debris representation, a different region image from S100 or debris representation determination, a region image of the same region or depicting the same property, and/or be any other suitable region image. Alternatively, the non-debris representation can be retrieved from a database, or otherwise determined. The non-debris representation can be determined as part of S300, determined before S300, and/or otherwise determined.
[0069] Identifying debris in the region image can include: optionally isolating the parcel in the foreground map using the parcel representation (e.g., masking out parcels of the foreground map not represented by the parcel representation), and removing the non-debris features (e.g., property features, etc.) by subtracting the non-debris representation (e.g., non-debris feature mask(s)) from the foreground map (e.g., relabelling the pixels as background, masking out the pixels associated with the non-debris feature mask(s)), wherein the remaining foreground segments (e.g., pixels) can be representative of debris. Identifying debris can optionally include removing foreground segments (e.g., debris segments) satisfying a set of conditions (e.g., smaller than a threshold area, having less than a threshold dimension, etc.). However, the debris can be otherwise identified in the region image.
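The identification step above — parcel isolation, subtraction of non-debris feature masks, and removal of small foreground segments — can be sketched as follows. The function name, signature, and the minimum-area default are illustrative assumptions; connected-component labeling via `scipy.ndimage.label` is one plausible way to find "segments" as used above.

```python
import numpy as np
from scipy import ndimage

def identify_debris(foreground, non_debris_masks, parcel_mask, min_area_px=4):
    """Isolate the parcel in the foreground map, subtract non-debris feature
    masks (structures, vegetation, pools, ...), then drop segments smaller
    than a minimum area (hypothetical sketch)."""
    debris = foreground.astype(bool) & parcel_mask.astype(bool)
    for mask in non_debris_masks:
        debris &= ~mask.astype(bool)      # relabel non-debris pixels as background
    labels, n = ndimage.label(debris)     # connected debris segments
    for i in range(1, n + 1):
        if (labels == i).sum() < min_area_px:
            debris[labels == i] = False   # discard tiny spurious segments
    return debris
```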
[0070] In the above variants, when depth is used, the depth information can be used to classify pixels as foreground (e.g., debris) or background (e.g., non-debris). In a first example, the depth information can be used to determine ground pixels in a region image. In a second example, the depth information can be used to determine vegetation and/or property features in a region image (e.g., using heuristics, algorithms, and/or rules for processing the depth information).
[0071] Additionally or alternatively, the debris representation can be determined from the depth information, wherein the debris representation can be 3D (e.g., include a 2D footprint and height). In this variant, the depth information can be intersected with the 2D debris representation (e.g., the depth information can be masked with the 2D debris representation, etc.) to identify the points, voxels, mesh cells, and/or other depth units associated with the 2D debris segments.
[0072] However, the depth information can be otherwise used.
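The intersection of depth information with the 2D debris representation can be sketched as a volume estimate. This is an illustrative assumption-laden sketch: it assumes a rasterized elevation map, a flat ground elevation, and a uniform per-pixel footprint, none of which are specified above.

```python
import numpy as np

def debris_volume(depth_map, debris_mask, ground_elevation, pixel_area_m2):
    """Mask an elevation map with the 2D debris mask and integrate height
    above ground into a volume estimate (hypothetical sketch)."""
    # height above an assumed flat ground plane, clipped at zero
    heights = np.clip(depth_map - ground_elevation, 0, None)
    return float((heights * debris_mask).sum() * pixel_area_m2)
```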
[0073] In some variants, certain classes of debris can be selectively excluded from the debris representation. These variants can leverage debris modules that can identify (e.g., segment, semantically segment) a given debris class, wherein the output segments can be removed from the overall debris representation. Alternatively, the results of those debris modules can be omitted from the debris representation (e.g., in variants where the debris representation is aggregated from multiple debris classes). Certain debris classes can be excluded when: the property is associated with the debris class (e.g., the property is a car junkyard, the property is a used car lot, the property is an outdoor engine storage space, etc.), when a user requests that the debris class be omitted, or when any other suitable exclusion condition is met. The property-debris class association can be determined: manually, from a business listing, from the parcel zoning, the property zoning (e.g., single family home, multi-family home, commercial, etc.), and/or otherwise determined.
[0074] However, the debris representation can be otherwise determined.
[0075] The method can optionally include determining debris parameters. Examples of debris parameters can include: debris composition, debris location, debris size, debris volume, debris class, debris density, debris temperature, debris formation rate, and/or any other suitable parameter. The debris parameters can be determined from the debris representation, from the depth information (e.g., masked with the debris representation), and/or otherwise determined. For example, the debris area can be calculated from the size of the debris representation. In another example, the debris volume can be determined from the depth map or point cloud segment overlapping the debris representation. However, the debris parameters can be otherwise determined.
[0076] The method can optionally include determining a debris class for the debris. The debris class can be determined from the debris representation, from the depth map, and/or from any other suitable information. The debris class is preferably determined by a model trained to classify the debris (e.g., debris classifier), but can be manually or otherwise determined. The model can be a binary model (e.g., specific to a debris class), a multiclass model (e.g., trained to classify the debris as one or more of a set of potential debris classes), and/or any other suitable model. Examples of debris classes can include: flammable/nonflammable; easily mitigated/not easily mitigated; composition (e.g., vegetation, machinery, metal, plastic, etc.); and/or any other suitable class.
[0077] However, any other suitable debris information can be determined.
5.4 Generating a debris score based on the debris representation S400.
[0078] The method can optionally include generating a debris score using the debris representation S400, which can function to determine an amount of debris on a parcel. The debris score can be stored in the datastore (e.g., to enable score monitoring in S500) or not stored. The debris score can optionally be presented to a user.
[0079] The debris score can represent an area of the parcel covered by debris, the area of the parcel covered by debris divided by the parcel size (e.g., percent of the parcel covered by debris; percent of the parcel representation intersecting the debris representation; etc.), the area within a predetermined radius of a structure that is covered by debris, the volume of debris (e.g., determined from a 3D debris representation), ratio of the primary building or built structure area to the debris area, proximity of the debris to a built structure (e.g., proximity of any debris to a primary building, proximity of the debris centroid to the built structure, etc.), the typicality of the property’s amount of debris for the property’s area (e.g., normalized by the debris scores for other properties in the neighborhood), the quality of the debris (e.g., level of degradation, type of debris, etc.), the location of the debris (e.g., driveway debris may result in a higher score than backyard debris; determined by intersecting the debris representation with a driveway representation determined by a driveway module, etc.), the debris’ temporal change (e.g., aggregation over time, change over time, duration, etc.; determined from a series of images of the same property), and/or any other suitable measurement.
The debris score can be a percentage (e.g., of the parcel, of the unbuilt square footage); a measurement (e.g., total square area, number of debris piles, etc.); distribution metric (e.g., based on proximity to the primary structure); a binary score (e.g., in or on a property feature or not, such as inside a water feature, on a paved surface, etc.); a classification (e.g., high amount of debris, medium amount of debris, low amount of debris, etc.), a comparison to a reference area score (e.g., the difference to a reference score, a percentile for the reference area, etc.), wherein the reference area can be: neighborhood, geofenced area, town, city, county, state, province, country; a proximity to a built structure and/or other property feature, and/or any other suitable value. The debris score is preferably generated using the debris representation (e.g., including only the parcel region; including multiple parcel regions; etc.), the parcel representation (e.g., parcel masks, boundaries, etc.), and/or any other information.
[0080] In a first variant, the debris score can be calculated by converting the area of the debris, represented by the modified and/or original debris representation, into a measurement (e.g., square footage, square meters, etc.). This can be done based on a known square-area to pixel mapping, or otherwise determined.
[0081] In a second variant, the debris score can be calculated by converting the area of the debris represented by the modified and/or original debris representation into a measurement and dividing the amount by the size of the parcel.
[0082] In a third variant, the debris score can be calculated by converting the area labelled as ground into the debris score. Optionally, non-debris objects can be removed from the debris representation before calculating the debris score. Optionally, parcel representations can be combined with the debris representation to more accurately calculate the debris score (e.g., based on the debris representation and one or more parcel representations).
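The area conversion and parcel-normalized scoring of the first two variants can be sketched as below. The function name and the 0.3 m ground-sample-distance (GSD) default are illustrative assumptions; the pixel-to-square-area mapping would come from the imagery metadata in practice.

```python
import numpy as np

def debris_scores(debris_mask, parcel_mask, gsd_m=0.3):
    """Convert debris pixels to square meters using a known GSD, and divide
    by the parcel area for a coverage percentage (hypothetical sketch)."""
    pixel_area = gsd_m ** 2                       # square meters per pixel
    debris_m2 = float(debris_mask.sum()) * pixel_area
    parcel_m2 = float(parcel_mask.sum()) * pixel_area
    percent = 100.0 * debris_m2 / parcel_m2 if parcel_m2 else 0.0
    return debris_m2, percent
```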
[0083] In a fourth variant, a mask of a property feature (e.g., calculated in S300) can be used to determine an instance of the property feature in the region image and/or foreground map, and the debris score can be calculated based on the debris within the area of the region image defined by the mask of the property feature.
[0084] In a first example, the property feature can be a water feature on the parcel and the debris score can be calculated for the debris in the water feature (e.g., debris in a pool, debris in a lake, debris in a fountain, debris in a pond, etc.).
[0085] In a second example, the property feature can be a paved surface (e.g., driveway, walkway, etc.) and the debris score can be calculated based on the debris that is on the paved surface.
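The fourth variant's feature-restricted scoring can be sketched as a mask intersection. The function name and the choice to return the fraction of the feature covered by debris are illustrative assumptions; any of the score forms listed above could be computed from the same overlap.

```python
import numpy as np

def feature_debris_score(debris_mask, feature_mask):
    """Score only the debris falling inside a property-feature mask
    (e.g., a pool or driveway); returns the fraction of the feature
    covered by debris (hypothetical sketch)."""
    overlap = debris_mask.astype(bool) & feature_mask.astype(bool)
    feature_px = feature_mask.astype(bool).sum()
    return float(overlap.sum()) / feature_px if feature_px else 0.0
```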
[0086] In a fifth variant, the debris representation can be fed as input into a classification model of the computing system and the debris score is output by the classification model. The classification model can be trained on a set of debris representations, each associated with a debris score. In a specific example, the classification model can be a regression model.
[0087] However, the debris score can be otherwise determined.
5.5 Monitoring the debris information over time S500.
[0088] The method can optionally include monitoring the debris information over time S500, which can function to determine changes to the property and/or debris over time. The monitored debris information can include: the debris score, the debris representation, the debris parameters, and/or other information. The changes can be quantified (e.g., rate of change over time), and can be used in property analyses or otherwise used. The changes can include: more debris accumulation over time, less debris accumulation over time, debris relocation over time (e.g., relocation of the same debris instance, relocation of different debris classes, relocation of a centroid of the debris, etc.), change in debris composition over time, debris residency duration (e.g., how long the debris remains on the parcel, which can be indicative of built structure construction or modification), and/or any other suitable set of changes.
[0089] S500 can include redetermining (e.g., recalculating) the debris information (e.g., using S100-S300, S100-S400, etc.) and storing the debris information in association with a timestamp (e.g., of the region image) and the property identifier. The debris information can be re-calculated periodically (e.g., every month, every 3 months, every 6 months, every 12 months, every new image for the property, etc.), or determined at any other suitable frequency. The debris information can be sent to the user in response to an API query, recorded in the database as an entry associated with the property identifier, sent to a third-party system, and/or otherwise used. The debris information can be monitored using one or more time series processes to determine historical change over time, to predict accumulation of debris over time, and/or otherwise used. The time series processes can include: recurrent neural networks, autoregression, moving average, autoregressive moving average, autoregressive integrated moving-average, seasonal autoregressive integrated moving-average, vector autoregression, vector autoregression moving-average, exponential smoothing, and/or any other suitable process. However, the debris information can be otherwise monitored.
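One of the simpler time-series processes named above, exponential smoothing, can be sketched over a history of debris scores. The smoothing factor `alpha` and the delta-based accumulation signal are illustrative assumptions, not parameters from the specification.

```python
def smoothed_debris_trend(scores, alpha=0.3):
    """Apply simple exponential smoothing to a time-ordered list of debris
    scores; a positive delta between the last two smoothed values suggests
    debris accumulation (hypothetical sketch)."""
    smoothed = [scores[0]]
    for s in scores[1:]:
        # each smoothed point blends the new score with the prior estimate
        smoothed.append(alpha * s + (1 - alpha) * smoothed[-1])
    return smoothed, smoothed[-1] - smoothed[-2]
```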
5.6 Returning the debris information S600.
[0090] The method can optionally include returning the debris information S600, which can function to provide debris information to an endpoint (e.g., the requesting endpoint). For example, S600 can function to return the debris information in response to a request, store the debris information in the datastore for future retrieval, and/or provide any other functionality. The debris information can include: the debris score (e.g., determined in S400 and/or S500), the debris representation (e.g., determined in S300), the debris class (e.g., wherein the debris representation and/or image segment corresponding to debris is fed to a debris classification model), the debris parameters, and/or other debris information.
[0091] The debris information can be presented as part of a code block (e.g., data returned in response to an API request), presented in a user interface (e.g., in a user-facing application, such as on a user device), and/or otherwise returned. The debris information can be automatically returned in response to an event, or returned at any other suitable time. The event can include the debris score falling above or below a predetermined threshold, a re-calculation of the debris score (e.g., when updated imagery is received by a system for properties), receipt of a debris information request, and/or any other suitable event. The debris information can be automatically returned to a third-party system (e.g., to update models and/or model outputs that are generated based on the debris score), to the datastore, to a third-party datastore, to a user, and/or to any other suitable entity. However, the debris information can be otherwise returned.
[0092] Different processes and/or elements discussed above can be performed and controlled by the same or different entities. In the latter variants, different subsystems can communicate via: APIs (e.g., using API requests and responses, API keys, etc.), requests, and/or other communication channels.
[0093] Alternative embodiments implement the above methods and/or processing modules in non-transitory computer-readable media, storing computer-readable instructions that, when executed by a processing system, cause the processing system to perform the method(s) discussed herein. The instructions can be executed by computer-executable components integrated with the computer-readable medium and/or processing system. The computer-readable medium may include any suitable computer-readable media such as RAMs, ROMs, flash memory, EEPROMs, optical devices (CD or DVD), hard drives, floppy drives, non-transitory computer-readable media, or any suitable device. The computer-executable component can include a computing system and/or processing system (e.g., including one or more collocated or distributed, remote or local processors) connected to the non-transitory computer-readable medium, such as CPUs, GPUs, TPUs, microprocessors, or ASICs, but the instructions can alternatively or additionally be executed by any suitable dedicated hardware device.
[0094] Embodiments of the system and/or method can include every combination and permutation of the various system components and the various method processes, wherein one or more instances of the method and/or processes described herein can be performed asynchronously (e.g., sequentially), concurrently (e.g., in parallel), or in any other suitable order by and/or using one or more instances of the systems, elements, and/or entities described herein.
[0095] As a person skilled in the art will recognize from the previous detailed description and from the figures and claims, modifications and changes can be made to the preferred embodiments of the invention without departing from the scope of this invention defined in the following claims.

Claims

We Claim:
1. A method for automatic debris detection comprising:
• determining a region image depicting a built structure;
• determining a parcel boundary surrounding the predetermined built structure;
• determining a foreground map based on the region image using a first model;
• calculating a built structure map for the predetermined built structure within the region image using a second model;
• identifying debris within the region image based on the parcel boundary, the foreground map, and the built structure map; and
• generating a debris score for the built structure based on the identified debris.
2. The method of Claim 1, wherein identifying the debris within the region image comprises subtracting the built structure map from features of the foreground map falling within the parcel boundary.
3. The method of Claim 1, further comprising monitoring the debris score over time.
4. The method of Claim 1, further comprising identifying a feature instance for a predetermined property feature, and wherein generating the debris score further comprises generating a feature debris score based on an overlap of the foreground map and the feature instance.
5. The method of Claim 1, wherein the debris score is generated based on depth information.
6. The method of Claim 5, wherein the depth information is used to differentiate terrain from foreground objects.
7. The method of Claim 1, wherein the region image is a remote image.
8. The method of Claim 1, wherein the debris score represents a percentage of a parcel, defined by the parcel boundary, that is covered in debris.
9. The method of Claim 1, further comprising calculating a vegetation map using a third model based on the region image; removing vegetation features from the foreground map using the vegetation map; and generating the debris score based on the foreground map with the vegetation features removed.
10. The method of Claim 1, wherein the debris score is used as an input to an automated valuation model.
11. The method of Claim 1, wherein the region image depicts a plurality of built structures, wherein a different area is determined for each built structure depicted within the region image, and wherein a different debris score is generated for each built structure based on regions of the foreground map and the built structure map overlapping the respective area.
12. The method of Claim 1, further comprising receiving a debris score request, wherein the debris score is provided responsive to the debris score request.
13. The method of Claim 1, further comprising receiving a debris score request, wherein the region image is determined responsive to the debris score request.
14. A system for automatic debris detection, comprising a computer processor configured to:
• receive a region image depicting a set of properties;
• determine a geographic region for each of the set of properties;
• determine a foreground map based on the region image using a first model;
• determine a set of non-debris maps based on the region image using a set of second models;
• identify debris within the region image based on the foreground map and the set of non-debris maps; and
• generate debris information for each property in the set based on the identified debris and the respective geographic region.
15. The system of Claim 14, wherein the region image comprises depth information and wherein the foreground map is calculated based on the depth information.
16. The system of Claim 14, wherein the debris information comprises at least one of a debris score or a debris representation.
17. The system of Claim 16, wherein an inspection notification is automatically sent to an inspector based on a value of the debris score.
18. The system of Claim 16, wherein the debris score for a property represents a percentage of the respective geographic region that is covered in debris.
19. The system of Claim 14, wherein the set of non-debris maps comprise at least a built structure map and a vegetation map, wherein identifying the debris comprises subtracting the set of non-debris maps from the foreground map.
20. The system of Claim 14, wherein the region image only depicts a single geographic area for a single property.
21. A method, comprising:
• determining an image of a property;
• identifying debris based on the image; and
• returning debris information determined based on the identified debris.
22. The method of claim 21, wherein the image comprises a remote image.
23. The method of claim 21, wherein the property comprises a built structure.
24. The method of claim 21, wherein identifying the debris comprises:
• identifying foreground segments within the image;
• identifying non-debris segments within the image; and
• identifying the debris by subtracting the non-debris segments from the foreground segments.
25. The method of claim 24, wherein the non-debris segments are identified using a set of models, each trained to identify a different non-debris object class.
26. The method of claim 25, wherein the non-debris object classes comprise at least one of: a built structure or accessory structure.
27. The method of claim 21, wherein the debris is further determined based on a parcel boundary associated with the property.
28. The method of claim 21, wherein the debris information comprises at least one of a debris score and a debris representation.
29. The method of claim 28, wherein the debris representation comprises a bounding box around the identified debris.
30. The method of claim 28, wherein the debris score comprises a percentage of the property that is covered by the identified debris.
31. A system, configured to perform any of the methods of claims 21-30.
32. A method, comprising using any of the systems of claims 1-20.
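The identification pipeline recited in the claims (restricting a model-derived foreground map to the parcel, subtracting non-debris maps such as built structure and vegetation maps, and reporting the remaining coverage as a score) can be sketched with boolean pixel masks. This is an illustrative sketch only; the function name, the array-based mask representation, and the percentage convention are assumptions for exposition, not the patented implementation.

```python
import numpy as np

def debris_score(foreground_mask, non_debris_masks, parcel_mask):
    """Return the percentage of the parcel covered by candidate debris.

    All arguments are boolean arrays of the same H x W shape:
      foreground_mask:  pixels a first model classifies as foreground
      non_debris_masks: per-class masks (e.g., built structure,
                        vegetation) produced by a set of second models
      parcel_mask:      pixels falling inside the parcel boundary
    """
    # Restrict foreground features to the parcel (claims 1 and 2).
    debris = foreground_mask & parcel_mask
    # Subtract each non-debris map from the foreground (claims 2, 19, 24).
    for mask in non_debris_masks:
        debris &= ~mask
    parcel_area = parcel_mask.sum()
    if parcel_area == 0:
        return 0.0
    # Score as the percentage of the parcel covered in debris
    # (claims 8, 18, 30).
    return 100.0 * debris.sum() / parcel_area
```

With masks rasterized from the respective model outputs and the parcel polygon, the returned value corresponds to the claimed debris score; for example, a return of 25.0 would indicate that a quarter of the parcel pixels remain classified as debris after subtraction.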
PCT/US2021/055227 2020-10-15 2021-10-15 Method and system for automated debris detection WO2022082007A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202063092283P 2020-10-15 2020-10-15
US63/092,283 2020-10-15

Publications (1)

Publication Number Publication Date
WO2022082007A1 (en) 2022-04-21

Family

ID=81186305

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2021/055227 WO2022082007A1 (en) 2020-10-15 2021-10-15 Method and system for automated debris detection

Country Status (2)

Country Link
US (2) US11367265B2 (en)
WO (1) WO2022082007A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190213438A1 (en) * 2018-01-05 2019-07-11 Irobot Corporation Mobile Cleaning Robot Artificial Intelligence for Situational Awareness
US20190213412A1 (en) * 2015-08-31 2019-07-11 Cape Analytics, Inc. Systems and methods for analyzing remote sensing imagery
US20190354772A1 (en) * 2016-12-26 2019-11-21 Argosai Teknoloji Anonim Sirketi A method for foreign object debris detection
US20200134753A1 (en) * 2018-10-31 2020-04-30 Alexander Vickers System and Method for Assisting Real Estate Holding Companies to Maintain Optimal Valuation of Their Properties

Also Published As

Publication number Publication date
US20220121870A1 (en) 2022-04-21
US20220277538A1 (en) 2022-09-01
US11367265B2 (en) 2022-06-21

Legal Events

Date Code Title Description
NENP Non-entry into the national phase (Ref country code: DE)
122 EP: PCT application non-entry in European phase (Ref document number: 21881192; Country of ref document: EP; Kind code of ref document: A1)