US20180260793A1 - Automatic assessment of damage and repair costs in vehicles - Google Patents
Automatic assessment of damage and repair costs in vehicles
- Publication number
- US20180260793A1 (application US15/973,343)
- Authority
- US
- United States
- Prior art keywords
- vehicle
- image
- images
- damaged
- damage
- Prior art date
- Legal status
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/20—Administration of product repair or maintenance
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
- G06N20/20—Ensemble learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/01—Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N7/00—Computing arrangements based on specific mathematical models
- G06N7/01—Probabilistic graphical models, e.g. probabilistic networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
- G06Q30/0283—Price estimation or determination
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q40/00—Finance; Insurance; Tax strategies; Processing of corporate or income taxes
- G06Q40/08—Insurance
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
- G06T7/001—Industrial image inspection using an image reference approach
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30136—Metal
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/20—Indexing scheme for editing of 3D models
- G06T2219/2008—Assembling, disassembling
Definitions
- FIG. 3 is a block diagram illustrating components for a server from the system illustrated in FIG. 1 according to some embodiments of the disclosure.
- FIG. 4 is a flow diagram of method steps for automatically estimating a repair cost for a vehicle, according to one embodiment of the disclosure.
- FIG. 7 is an example of a color distribution of the input image from FIG. 6 , according to one embodiment of the disclosure.
- FIG. 8 is an example of an edge distribution of the input image from FIG. 6 , according to one embodiment of the disclosure.
- FIG. 39 illustrates a template matching approach to 2D-to-3D alignment, according to one embodiment.
- FIG. 53 illustrates an example of back-face culling, according to one embodiment.
- FIG. 55 illustrates an example of fusing multiple 2D images with heatmaps onto a 3D model.
- FIG. 1 is a block diagram of a system 100 in accordance with certain embodiments of the disclosure.
- the system 100 includes a server or cluster of servers 102 , one or more client devices labeled 104 - 1 through 104 - n , an adjuster computing device 106 , several network connections linking client devices 104 - 1 through 104 - n to server(s) 102 including the network connections labeled 108 - 1 through 108 - m , one or more databases 110 , and a network connection 112 between the server(s) 102 and the adjuster computing device 106 .
- FIG. 2 is a block diagram of basic functional components for a client device 104 according to some aspects of the disclosure.
- the client device 104 includes one or more processors 202 , memory 204 , network interfaces 206 , storage devices 208 , power source 210 , one or more output devices 212 , one or more input devices 214 , and software modules—operating system 216 and a vehicle claims application 218 —stored in memory 204 .
- the software modules are provided as being contained in memory 204 , but in certain embodiments, the software modules are contained in storage devices 208 or a combination of memory 204 and storage devices 208 .
- [ ] denotes the indicator function taking values 0 or 1
- C is the set of pairs of neighboring pixels, and other two scalars are input parameters (determined by experiments).
- the choice of the initial contour is critical, and the active contour technique itself does not specify how to choose an appropriate initial contour. Since the location of the vehicle within the image is not known, one might put the initial contour at or close to the boundary of the photo in order to ensure that the vehicle is always contained within it. However, this often results in other objects being included in the final result of the background removal process.
- FIGS. 9A-9C illustrate a specular reflection removal process, according to one embodiment of the disclosure.
- FIG. 9A illustrates an input image of a damaged vehicle with the background removed.
- FIG. 9B illustrates the low frequency components of a damaged vehicle in FIG. 9A .
- FIG. 9C illustrates a reflection-removed version of the vehicle in FIG. 9A , with low frequency components removed and color corrected to remove saturated pixels.
- the reference image is itself segmented. This can be done easily, since commercial three-dimensional models usually come equipped with segmentation into their component parts.
- the system can calculate an estimated repair cost at step 408 .
- some embodiments provide the damaged parts list to a database of parts and labor costs. Several such databases exist and are already used by auto repair shops and insurance adjustors on a daily basis once a parts list is identified.
- weight vectors for each neuron in each layer are the ones adjusted during training. The rest of the weight vectors, once chosen, remain fixed.
- Table 1 below provides an example of the number of parameters used in one implementation for detection of damage to the front bumper:
- Data augmentation can include offline augmentation or online augmentation to modify the model training data based on existing images.
- offline augmentation corresponds to increasing the volume of model training data before training a model.
- online augmentation corresponds to modifying the model training data when training so that the model is trained on the slightly different and randomly modified original data. Online augmentation does not increase the volume of data used for training, but rather reduces the chance of over-fitting the model on the original data.
- online and offline augmentation methods can be performed independently.
- a server such as server 102 receives one or more images of a damaged vehicle from a client device.
- the images can be taken by, for example, a car insurance customer, an insurance agent, a police officer, or anyone else.
- Pre-processing may include one or more of the following portions: (A) vehicle exterior classification, (B) car pose estimation, and (C) car instance segmentation.
- Instance segmentation includes identifying the outline of the object of interest (e.g., the car) and removing all other parts of the image (e.g., the background and other cars that are not of interest).
- Performing instance segmentation during the model training phase provides a better training set for the classification network at run-time, as well as generates the input for 2D-to-3D alignment algorithm (described in more detail below).
- Accurate instance-level segmentation is a challenging problem, as many leading segmentation methods are unaware of individual object instances.
- Embodiments of the disclosure provide a novel instance-aware segmentation approach using Multi-task Network Cascades (MNC) enhanced using Structured Random Forest (SRF) with edge-map based potentials.
- MNC Multi-task Network Cascades
- SRF Structured Random Forest
- FIG. 33 illustrates example results of performing MNC to segment images of vehicles.
- the images in the top row are the original images; the second row shows the different instances segmented by MNC.
- MNC performs well at differentiating the instances of different vehicles (shown in shading in the second row of FIG. 33 ).
- the segmentation does not preserve the precise boundary of each instance well. This phenomenon is expected, as the masks of MNC are obtained on top of low-resolution deep feature maps.
- E is the unary energy summing over all superpixels. Embodiments of the disclosure calculate this term using the segmentation result from MNC.
- U(x) is defined by the percentage of the area of the superpixels x being segmented as object.
- Step 2: Add the cost function.
- A cost function is added so that the confidence score can be reflected throughout the model (in the form of a gradient).
- one embodiment of the disclosure provides for a modified Grad-CAM algorithm, which adds a rescale layer after step 5.
- the template matching approach includes two successive stages: initial template matching and refining matching with contour, as shown in FIG. 39 .
- a pre-trained deep neural network is used to extract features from both a 3D model and the image to be aligned. Then, the cosine similarity is measured between rendered and real images.
- the algorithm pipeline is shown in FIG. 40 .
- the underlying camera matrix can be solved via ordinary least-square optimization.
- Algorithm 1 Proposed algorithm for aligning 2D image with 3D CAD model
- Input: Vehicle image with background removed Iv, 3D CAD model M, and initial camera matrix P0
- Output: Optimal camera matrix P for 3D-to-2D alignment
- Compute posterior probabilities {pi}, i = 1 . . . Ni, using (6.2)
- an encoder-decoder shaped neural network for heatmaps prediction.
- the encoder part of this network is composed of a series of convolutional layers and intermediate max pooling layers.
- the output is a down-sampled feature map extracted from the input image.
- Following the down-sampling encoder network, there is an up-sampling decoder network.
- a series of transposed convolutional layers, also called deconvolutional layers, are applied to up sample the feature maps.
- Embodiments of the disclosure call the layer that connects the encoder and decoder networks the bottleneck layer, because it has the smallest input and output size.
- Several skip connection layers are bridged between the encoder network and the decoder network, to merge the spatially rich information from low-level features in the encoder network, with the high-level object knowledge in the decoder network.
- a second learning task can be added to the existing model in some embodiments.
- the model learns to predict the visibility status of each anchor point by formulating it as multi-label binary classification. From the bottleneck layer, the model branches out a series of cascaded fully connected layers to predict the visibility status.
- the regression task can use the least-square error between the predicted heatmaps and the ground-truth heatmaps as the loss function.
- the ground-truth visibility status can be used to mask out the losses coming from the invisible anchor points, so they are not present in the final loss function.
- the predicted visibility can be used to determine which heatmaps to output.
- J = [J(X0; p) . . . J(Xi; p) . . . ]
- embodiments of the disclosure are able to enhance the damage assessment, for example, in case the heatmap for one image is not satisfactory.
- embodiments of the disclosure utilize the 3D model of the vehicle, and map each heatmap into a common 3D space, leading to a 3D version of a heatmap that is convenient for damage appraisal.
- image fusion can be thought of as "wrapping" the heatmaps of the 2D images onto the 3D model.
- embodiments of the disclosure map the heatmap intensity of a single view to the 3D model. This process can be repeated for each image separately, and the results are summed together. Alternatively, if multiple heatmaps correspond to the same vertex, the maximum value in each set can be selected as the final per-vertex heatmap intensity. Still further embodiments may use the mean value as the final per-vertex heatmap intensity if multiple heatmaps correspond to the same vertex (a minimal fusion sketch is shown after this list).
- FIG. 55 illustrates an example of fusing multiple 2D images with heatmaps onto a 3D model.
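The per-vertex fusion described above (sum, maximum, or mean of the intensities contributed by each view) can be illustrated with a short sketch. This is not code from the patent; the array layout, the use of NaN to mark vertices hidden in a view, and the NumPy implementation are assumptions made for illustration only.

```python
import numpy as np

def fuse_vertex_heatmaps(per_view_heatmaps, mode="max"):
    """Fuse per-vertex heatmap intensities from several 2D views onto one 3D model.

    per_view_heatmaps: list of (num_vertices,) arrays; NaN marks vertices that are
    not visible in that view, so they do not contribute to the fused value.
    """
    stacked = np.vstack(per_view_heatmaps)       # shape: (num_views, num_vertices)
    visible = ~np.isnan(stacked)                 # which views actually cover each vertex

    if mode == "sum":
        fused = np.nansum(stacked, axis=0)
    elif mode == "max":
        fused = np.max(np.where(visible, stacked, -np.inf), axis=0)
        fused[~visible.any(axis=0)] = 0.0        # vertices seen by no view get zero
    elif mode == "mean":
        counts = visible.sum(axis=0)
        fused = np.where(counts > 0, np.nansum(stacked, axis=0) / np.maximum(counts, 1), 0.0)
    else:
        raise ValueError("mode must be 'sum', 'max', or 'mean'")
    return fused

# Example: three views of a 5-vertex model, with some vertices hidden in some views.
views = [
    np.array([0.9, 0.1, np.nan, 0.0, np.nan]),
    np.array([np.nan, 0.2, 0.7, 0.0, np.nan]),
    np.array([0.8, np.nan, 0.6, 0.1, np.nan]),
]
print(fuse_vertex_heatmaps(views, mode="max"))   # [0.9 0.2 0.7 0.1 0. ]
```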
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Business, Economics & Management (AREA)
- Software Systems (AREA)
- General Engineering & Computer Science (AREA)
- Computing Systems (AREA)
- Artificial Intelligence (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Mathematical Physics (AREA)
- Finance (AREA)
- Accounting & Taxation (AREA)
- Strategic Management (AREA)
- Development Economics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Economics (AREA)
- Marketing (AREA)
- General Business, Economics & Management (AREA)
- Computational Linguistics (AREA)
- Quality & Reliability (AREA)
- General Health & Medical Sciences (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Entrepreneurship & Innovation (AREA)
- Human Resources & Organizations (AREA)
- Technology Law (AREA)
- Tourism & Hospitality (AREA)
- Game Theory and Decision Science (AREA)
- Operations Research (AREA)
- Medical Informatics (AREA)
- Computational Mathematics (AREA)
- Architecture (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- Probability & Statistics with Applications (AREA)
Abstract
Description
- The present application is a continuation-in-part (CIP) of U.S. patent application Ser. No. 15/092,480, filed on Apr. 6, 2016, which is hereby incorporated by reference in its entirety.
- Currently, after a vehicle has been damaged in a road accident or otherwise, the vehicle is taken by the owner or a tow company to an auto repair shop for inspection. Inspection of the vehicle by a mechanic at the auto repair shop is required in order to assess which parts of the vehicle need to be repaired or replaced. An estimate is then generated based on the inspection. In some cases, when an insurance claim is filed, the estimate is forwarded to an insurance company to approve the repairs before the repairs are made to the vehicle.
- From end-to-end, the process of vehicle inspection, estimate generation, claim approval, and vehicle repair can be long and complex, involving several parties including at least a customer, an auto repair shop, and a claim adjustor.
- Accordingly, there is a need in the art for an improved system that overcomes some of the drawbacks and limitations of conventional approaches.
- Embodiments of the disclosure provide a method, computer-readable storage medium, and device for: receiving, at a server computing device over an electronic network, one or more images of a damaged vehicle from a client computing device; performing computerized image processing based on the one or more images to generate one or more damage detection images, wherein each damage detection image is a two-dimensional (2D) image that includes indications of areas of damage to the vehicle in the damage detection image; mapping the one or more damage detection images to a three-dimensional (3D) model of the vehicle to generate a damaged 3D model that indicates areas of the vehicle that are damaged; and calculating an estimated repair cost for the vehicle based on the damaged 3D model.
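The claimed flow can be summarized as a short pipeline. The sketch below is only an illustration of that flow; the function names and the injected component callables are hypothetical placeholders for the image-processing, 3D-mapping, and costing stages described later in the disclosure.

```python
from typing import Callable, Iterable

def estimate_repair_cost(images: Iterable[bytes],
                         detect_damage_2d: Callable,
                         map_to_3d_model: Callable,
                         cost_model: Callable) -> float:
    """Mirror of the claimed flow: 2D damage detection -> 3D fusion -> cost estimate."""
    # 1. Produce a 2D damage-detection image (e.g., a heatmap) for every input photo.
    damage_images = [detect_damage_2d(img) for img in images]
    # 2. Map all 2D detections onto a single 3D model of the vehicle.
    damaged_3d_model = map_to_3d_model(damage_images)
    # 3. Turn the damaged 3D model into a repair-cost estimate.
    return cost_model(damaged_3d_model)
```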
-
FIG. 1 is a block diagram illustrating a system in accordance with some example embodiments of the disclosure. -
FIG. 2 is a block diagram illustrating components of a client device from the system illustrated inFIG. 1 according to some embodiments of the disclosure. -
FIG. 3 is a block diagram illustrating components for a server from the system illustrated inFIG. 1 according to some embodiments of the disclosure. -
FIG. 4 is a flow diagram of method steps for automatically estimating a repair cost for a vehicle, according to one embodiment of the disclosure. -
FIG. 5 is a flow diagram of method steps for performing image processing on one or more images to detect external damage of the vehicle, according to one embodiment of the disclosure. -
FIG. 6 is an example of an input image of a damaged vehicle, according to one embodiment of the disclosure. -
FIG. 7 is an example of a color distribution of the input image fromFIG. 6 , according to one embodiment of the disclosure. -
FIG. 8 is an example of an edge distribution of the input image fromFIG. 6 , according to one embodiment of the disclosure. -
FIGS. 9A-9C illustrate a specular reflection removal process, according to one embodiment of the disclosure. -
FIG. 10A is an example of a reference image of vehicle, according to one embodiment of the disclosure. -
FIG. 10B is an example of an input image of damage to a vehicle, according to one embodiment of the disclosure. -
FIG. 11 is a conceptual diagram illustrating comparing a window from a reference image to a corresponding window of an input image where no damage is detected, according to one embodiment of the disclosure. -
FIG. 12 is a conceptual diagram illustrating comparing a window from a reference image to a corresponding window of an input image where damage is detected, according to one embodiment of the disclosure. -
FIG. 13 is a conceptual diagram illustrating the various windows in an input image where damage is detected, according to one embodiment of the disclosure. -
FIGS. 14-21 are screenshots of example interface screens of a vehicle claims application on a client device, according to various embodiments of the disclosure. -
FIG. 22 is a screenshot of an example interface screen of an adjuster computing device connected via a communications interface to a server configured to automatically estimate repair costs, according to one embodiment of the disclosure. -
FIG. 23 is a flow diagram of method steps for a vehicle claims application to prompt a client device to capture images of a damaged vehicle, according to one embodiment of the disclosure. -
FIG. 24 is a block diagram illustrating a multi-stage design of a machine learning system, according to one embodiment. -
FIG. 25 is a block diagram illustrating implementation of Convolutional Neural Networks (CNNs) to detect vehicle damage, according to one embodiment. -
FIG. 26 is a flow diagram of method steps for estimating vehicle damage from images of a damaged vehicle using a machine learning algorithm, according to one embodiment. -
FIG. 27 is a flow diagram of method steps for performing image processing on one or more images to detect external damage of the vehicle, according to one embodiment of the disclosure. -
FIG. 28 illustrates that a vehicle can be segmented into twenty-four (24) external parts, according to one embodiment. -
FIG. 29 illustrates an example of an interface of a damage annotation tool. -
FIG. 30 illustrates an example of eight pre-defined poses of a vehicle. -
FIG. 31 illustrates which external body parts are visible in each of the eight pre-defined poses, according to one implementation. -
FIG. 32 illustrates an example overview of an MNC (Multi-task Network Cascades) model, according to one embodiment. -
FIG. 33 illustrates some example results of performing MNC to segment images of vehicles. -
FIG. 34 illustrates an example structured edge detection algorithm. -
FIG. 35 illustrates a safe mask and an ignored mask generated from initial segmentation of MNC, according to one embodiment. -
FIG. 36 is an example of a modified VGG 19 model used to detect damage of vehicle parts, according to one embodiment. -
FIG. 37 illustrates a general Grad-CAM algorithm, according to one embodiment. -
FIG. 38 illustrates results of the Grad-CAM algorithm and modified Grad-CAM algorithm for detecting localized damage in images, according to some implementations. -
FIG. 39 illustrates a template matching approach to 2D-to-3D alignment, according to one embodiment. -
FIG. 40 illustrates template matching pose alignment with CNN features, according to one embodiment. -
FIG. 41 illustrates parameters of distance to camera center, elevation angle, azimuth angle, and yaw angle for an example 3D car model. -
FIG. 42 illustrates projecting the 3D model onto an image of a vehicle, according to one embodiment. -
FIG. 43 illustrates pose refinement, according to one embodiment. -
FIG. 44 illustrates results of pose refinement, according to one embodiment. -
FIG. 45 illustrates a set of 3D anchor points on the surface of the 3D model of a vehicle, according to one embodiment. -
FIG. 46 illustrates pairwise 3D-to-2D correspondence between a 3D pose that maps a 3D model onto a 2D image plane, according to one embodiment. -
FIG. 47 illustrates heatmaps for anchor points, according to one embodiment. -
FIG. 48 illustrates rendering synthetic images and determining anchor points for the synthetic images, according to one embodiment. -
FIG. 49 illustrates a camera matrix that maps anchor points of a 3D model to corresponding 2D projections in an image, according to one embodiment. -
FIG. 50 illustrates an example 3D model of car hood. -
FIG. 51 illustrates an example of a segmented 3D model. -
FIG. 52 illustrates the effect of directly projecting the 3D car model onto a 2D image, according to one embodiment. -
FIG. 53 illustrates an example of back-face culling, according to one embodiment. -
FIG. 54 illustrates a heatmap of intensity values mapped onto a visible region of a 3D model, according to one embodiment. -
FIG. 55 illustrates an example of fusing multiple 2D images with heatmaps onto a 3D model. - Embodiments of the disclosure provide systems and methods that apply computer vision and image processing to images of a damaged vehicle to determine which parts of the vehicle are damaged and estimate the cost of repair or replacement, thus automating the damage assessment and cost appraisal process. Additionally, in some embodiments, the server computing device may classify the loss as a total, medium, or small loss.
- The disclosed automatic vehicle damage assessment system is a software system that uses captured images of a damaged vehicle along with auxiliary information available from other sources to assess the damage and, optionally, to provide an appraisal of damage and estimate of repair costs. In some embodiments, the captured images comprise one or more still images of the damaged vehicle and damaged areas. The auxiliary data includes the vehicle's make, model, and year. In other embodiments, the captured images include not only still images, but also video, LIDAR imagery, and/or imagery from other modalities. In some embodiments, the auxiliary information includes additional information available from insurance and vehicle registration records, publicly available information for the vehicle make and model, vehicle data from on-board sensors and installed devices, as well as information regarding the state of the imaging device at the time of image capture, including location information (e.g., GPS coordinates), orientation information (e.g., from gyroscopic sensors), and settings, among others.
- The automatic vehicle damage assessment system is a first-of-its-kind system that leverages state-of-the-art computer vision and machine learning technologies to partially or fully automate the auto claims submission and settlement process, thereby introducing efficiencies in auto insurance claims processing. The system can be expanded to additional sensors and information sources as these become available on smartphone devices including, for instance, stereo/depth sensing modalities. Additionally, in some embodiments, the image capture process can be interactive, with an application (“app”) installed on a smartphone or other client device that guides a user through the process of capturing images of the damaged vehicle.
- In one example implementation, images (e.g., photos or videos) showing damage to the vehicle are captured soon after the damage occurs. The images can be taken with a mobile phone and sent to a server by the vehicle owner or driver over a cellular or wireless network connection, either through a proprietary platform such as a mobile application or through a web-based service. In some embodiments, an insurance company field inspector or adjustor visits the vehicle site, captures the requisite images, and uploads them to the server, as is currently done in some jurisdictions or countries. In further embodiments, the images can be captured by an auto repair shop to which the vehicle is taken after an accident.
- In embodiments where a mobile phone is used to collect the images, information about the camera's location from the mobile phone GPS system, the camera's orientation from the mobile phone's gyroscope and accelerometer, the time at which the images are taken, and the camera's resolution, image format, and related attributes can also be provided to the server.
- In embodiments where a telematics system is installed in the vehicle, the telematics system can provide information to the server about the vehicle's state at, prior to, and/or after the time of accident, velocity and acceleration profile of the vehicle, states of the airbags and turn signals, and other relevant vehicle state data.
- Certain “metadata” about the vehicle are also available and stored in a database accessible by the server. The metadata includes at least the vehicle make, model, and year. The metadata may optionally include images of the vehicle prior to the occurrence of damage.
- According to embodiments of the disclosure, the assessment of damage and associated repair costs relies upon image processing and machine learning technologies.
- In one embodiment, computer vision techniques are used to first clean the received images of unwanted artifacts, such as background clutter and specular reflections, and then, to find the best matching image of a reference vehicle of the same make/model/year. The system compares the received images with the corresponding reference images along several attributes, e.g., edge distribution, texture, and shape. Using a variety of computer vision techniques, the system recognizes where and how the received images depart from the reference images, and identifies the corresponding part(s) and/or regions on the exterior of the vehicle that are damaged. The reference images can, in some embodiments, be derived from a commercial 3D model of a vehicle of the same make and model, or from images of the same vehicle taken prior to the occurrence of damage in the current claim, e.g., at the time of purchase of the auto policy.
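One of the attributes mentioned above, the edge distribution, can be compared between a received image region and the corresponding reference region with a simple histogram distance. The sketch below is an illustration under assumed choices (Sobel gradients, 18 orientation bins, chi-squared distance); it is not the patent's specific comparison metric.

```python
import cv2
import numpy as np

def edge_orientation_histogram(gray: np.ndarray, bins: int = 18) -> np.ndarray:
    """Histogram of gradient orientations, weighted by gradient magnitude."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    magnitude = np.hypot(gx, gy)
    angle = np.mod(np.arctan2(gy, gx), np.pi)        # orientations folded into [0, pi)
    hist, _ = np.histogram(angle, bins=bins, range=(0, np.pi), weights=magnitude)
    return hist / (hist.sum() + 1e-9)

def edge_distribution_distance(window: np.ndarray, reference_window: np.ndarray) -> float:
    """Chi-squared distance between edge distributions; larger values suggest damage."""
    h1 = edge_orientation_histogram(window)
    h2 = edge_orientation_histogram(reference_window)
    return float(0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + 1e-9)))
```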
- In another embodiment, the computer vision techniques involve segmenting an image into portions related to the vehicle in question, determining a pose of the vehicle, detecting localized damage in the 2D image of the vehicle, aligning the 2D image to a 3D model of the vehicle, and fusing the localized damage onto the 3D model.
- In some embodiments, a deep learning system (e.g., Convolutional Neural Network) is trained on a large number of images of damaged vehicles and corresponding information about damage, e.g., its extent and location on the vehicle, which are available from an insurance company's auto claims archives, in order to learn to assess damage presented with input images for a new auto claim. Such a pattern learning method can predict damage to both the exterior and interior of the vehicle, as well as the associated repair costs. The assessment of damage to the exterior determined by the image processing system can be used as input to the pattern learning system in order to supplement and refine the damage assessment. The current level of damage can be compared with the level of damage prior to filing of the current claim, as determined using image processing of prior images of the vehicle with the same system.
- A comprehensive damaged parts list is then generated to prepare an estimate of the cost required to repair the vehicle by looking up in a parts database for parts and labor cost. In the absence of such a parts database, the system can be trained to predict the parts and labor cost associated with a damage assessment, since these are also available in the archival data. In some embodiments, the regions and/or areas of damage on the exterior of the vehicle can also be identified.
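The lookup of parts and labor cost described in the preceding paragraph can be sketched as follows. The table contents and labor rate are invented for illustration; a production system would query a commercial estimating database rather than an in-memory dictionary.

```python
# Hypothetical parts/labor table; real systems query a commercial estimating database.
PARTS_DB = {
    "front bumper": {"part_cost": 310.00, "labor_hours": 2.5},
    "hood":         {"part_cost": 480.00, "labor_hours": 3.0},
    "left fender":  {"part_cost": 220.00, "labor_hours": 1.5},
}

def estimate_from_parts_list(damaged_parts, labor_rate_per_hour=55.0):
    """Sum part prices and labor for every damaged part found in the lookup table."""
    total = 0.0
    for part in damaged_parts:
        entry = PARTS_DB.get(part)
        if entry is None:
            continue  # unknown parts would be flagged for a human adjuster
        total += entry["part_cost"] + entry["labor_hours"] * labor_rate_per_hour
    return round(total, 2)

print(estimate_from_parts_list(["front bumper", "hood"]))  # 1092.5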
- In some embodiments, when additional information about the state of the vehicle at the time of the accident as well as of the camera used to take its images is available, the additional information can be used to further refine the system's predictive capabilities. In particular, knowing the location, velocity, and acceleration of the vehicle at the time of accident allows an assessment of the extent of impact to the vehicle during the accident, which allows better estimation of the extent of damage to the exterior and interior of the vehicle. Knowing further whether airbags were deployed during the collision can be useful for determination of the extent of damage, including whether there might be a “total loss” of the vehicle. The orientation of the camera when used to take images of the vehicle, as well as its location and time, can also assist the damage detection system in carrying out various image processing operations, as will become apparent during the discussion below.
- Advantageously, the automatic vehicle damage assessment systems and methods provided herein allow an insurance company to increase its efficiency of auto claims settlement processes. For example, automatic determination of “small value” claims can be settled rapidly without requiring time and effort on the part of adjustors to adjudicate. Automatic determination of “total loss” claims can also lead to early settlement of the claim, resulting in substantial savings in vehicle storage costs. Automatic verification of the damage appraisals sent by auto repair shops can supplant manual inspection of appraisals by adjustors and, in many cases, lead to efficiencies in adjustor involvement. Data aggregated across multiple claims and repair shops can also help identify misleading appraisals and recurrent fraudulent activity by repair shops. Early notification of the nature of damage can be sent to partner repair shops, allowing them to schedule the resources needed for repair early and more efficiently, reducing customer wait times, and thereby, rental vehicle costs.
- Also, customer satisfaction is enhanced in multiple ways. First, the system can rapidly identify the claims that have a small amount of damage and the claims that have such severe damage that the vehicle can not be repaired and is a “total loss.” In at least these two cases, the customer can be sent a settlement check almost immediately upon filing of the claim, with minimal involvement of human adjustors. In other cases, where the damage falls between the two extremes and the vehicle has to be taken to an auto repair shop, appraisal of the damage by the shop can be automatically checked by the system, leading to detection of potentially fraudulent claims, again with minimal requirement of a human adjustors' time and effort.
- Turning now to the figures,
FIG. 1 is a block diagram of asystem 100 in accordance with certain embodiments of the disclosure. Thesystem 100 includes a server or cluster ofservers 102, one or more client devices labeled 104-1 through 104-n, anadjuster computing device 106, several network connections linking client devices 104-1 through 104-n to server(s) 102 including the network connections labeled 108-1 through 108-m, one ormore databases 110, and anetwork connection 112 between the server(s) 102 and theadjuster computing device 106. - The client device or plurality of
client devices 104 and theadjuster computing device 106 can be any type of communication devices that support network communication, including a telephone, a mobile phone, a smart phone, a personal computer, a laptop computer, a smart watch, a personal digital assistant (PDA), a wearable or embedded digital device(s), a network-connected vehicle, etc. In some embodiments, theclient devices 104 andadjuster computing device 106 can support multiple types of networks. For example, theclient devices 104 and theadjuster computing device 106 may have wired or wireless network connectivity using IP (Internet Protocol) or may have mobile network connectivity allowing over cellular and data networks. - The
various networks 108 comprise wireless and/or wired networks. Networks 108 link the server 102 and the client devices 104. Networks 108 include infrastructure that supports the links necessary for data communication between at least one client device 104 and the server 102. Networks 108 may include a cell tower, base station, and switching network. - As described in greater detail herein,
client devices 104 are used to capture one or more images of a damaged vehicle. The images are transmitted over a network connection 108 to a server 102. The server 102 processes the images to estimate damage and repair costs. The estimates are transmitted over network connection 112 to the adjuster computing device 106 for approval or adjustment. -
FIG. 2 is a block diagram of basic functional components for a client device 104 according to some aspects of the disclosure. In the illustrated embodiment of FIG. 2, the client device 104 includes one or more processors 202, memory 204, network interfaces 206, storage devices 208, power source 210, one or more output devices 212, one or more input devices 214, and software modules—operating system 216 and a vehicle claims application 218—stored in memory 204. The software modules are provided as being contained in memory 204, but in certain embodiments, the software modules are contained in storage devices 208 or a combination of memory 204 and storage devices 208. Each of the components, including the processor 202, memory 204, network interfaces 206, storage devices 208, power source 210, output devices 212, input devices 214, operating system 216, and vehicle claims application 218, is interconnected physically, communicatively, and/or operatively for inter-component communications. - As illustrated,
processor 202 is configured to implement functionality and/or process instructions for execution withinclient device 104. For example,processor 202 executes instructions stored inmemory 204 or instructions stored on astorage device 208.Memory 204, which may be a non-transient, computer-readable storage medium, is configured to store information withinclient device 104 during operation. In some embodiments,memory 204 includes a temporary memory, an area for information not to be maintained when theclient device 104 is turned off. Examples of such temporary memory include volatile memories such as random access memories (RAM), dynamic random access memories (DRAM), and static random access memories (SRAM).Memory 204 also maintains program instructions for execution by theprocessor 202. -
Storage device 208 also includes one or more non-transient computer-readable storage media. Thestorage device 208 is generally configured to store larger amounts of information thanmemory 204. Thestorage device 208 may further be configured for long-term storage of information. In some embodiments, thestorage device 208 includes non-volatile storage elements. Non-limiting examples of non-volatile storage elements include magnetic hard discs, optical discs, floppy discs, flash memories, or forms of electrically programmable memories (EPROM) or electrically erasable and programmable (EEPROM) memories. -
Client device 104 usesnetwork interface 206 to communicate with external devices or server(s) 102 via one or more networks 108 (seeFIG. 1 ), and other types of networks through which a communication with theclient device 104 may be established.Network interface 206 may be a network interface card, such as an Ethernet card, an optical transceiver, a radio frequency transceiver, or any other type of device that can send and receive information. Other non-limiting examples of network interfaces include Bluetooth®, 3G and Wi-Fi radios in client computing devices, and Universal Serial Bus (USB). -
Client device 104 includes one ormore power sources 210 to provide power to the device. Non-limiting examples ofpower source 210 include single-use power sources, rechargeable power sources, and/or power sources developed from nickel-cadmium, lithium-ion, or other suitable material. - One or
more output devices 212 are also included inclient device 104.Output devices 212 are configured to provide output to a user using tactile, audio, and/or video stimuli.Output device 212 may include a display screen (part of the presence-sensitive screen), a sound card, a video graphics adapter card, or any other type of device for converting a signal into an appropriate form understandable to humans or machines. Additional examples ofoutput device 212 include a speaker such as headphones, a cathode ray tube (CRT) monitor, a liquid crystal display (LCD), or any other type of device that can generate intelligible output to a user. - The
client device 104 includes one ormore input devices 214.Input devices 214 are configured to receive input from a user or a surrounding environment of the user through tactile, audio, and/or video feedback. Non-limiting examples ofinput device 214 include a photo and video camera, presence-sensitive screen, a mouse, a keyboard, a voice responsive system, microphone or any other type of input device. In some examples, a presence-sensitive screen includes a touch-sensitive screen. - The
client device 104 includes anoperating system 216. Theoperating system 216 controls operations of the components of theclient device 104. For example, theoperating system 216 facilitates the interaction of the processor(s) 202,memory 204,network interface 206, storage device(s) 208,input device 214,output device 212, andpower source 210. - As described in greater detail herein, the
client device 104 uses vehicle claimsapplication 218 to capture one or more images of a damaged vehicle. In some embodiments, the vehicle claimsapplication 218 may guide a user of theclient device 104 as to which views should be captured. In some embodiments, the vehicle claimsapplication 218 may interface with and receive inputs from a GPS transceiver and/or accelerometer. - Server(s) 102 is at least one computing machine that can automatically calculate an estimate for vehicle repair costs based on images provided from a
client device 104. Theserver 102 has access to one ormore databases 110 and other facilities that enable the features described herein. - According to certain embodiments, similar elements shown in
FIG. 2 to be included in theclient device 104 can also be included in theadjuster computing device 106. Theadjuster computing device 106 may further include software stored in a memory and executed by a processor to review and adjust repair cost estimates generated by theserver 102. - Turning to
FIG. 3, a block diagram is shown illustrating components for a server 102, according to certain aspects of the disclosure. Server 102 includes one or more processors 302, memory 304, network interface(s) 306, storage device(s) 308, and software modules—image processing engine 310, damage estimation engine 312, and database query and edit engine 314—stored in memory 304. The software modules are provided as being stored in memory 304, but in certain embodiments, the software modules are stored in storage devices 308 or a combination of memory 304 and storage devices 308. In certain embodiments, each of the components, including the processor(s) 302, memory 304, network interface(s) 306, storage device(s) 308, image processing engine 310, damage estimation engine 312, and database query and edit engine 314, is interconnected physically, communicatively, and/or operatively for inter-component communications. - Processor(s) 302, analogous to processor(s) 202 in
client device 104, is configured to implement functionality and/or process instructions for execution within theserver 102. For example, processor(s) 302 executes instructions stored inmemory 304 or instructions stored onstorage devices 308.Memory 304, which may be a non-transient, computer-readable storage medium, is configured to store information withinserver 102 during operation. In some embodiments,memory 304 includes a temporary memory, i.e., an area for information not to be maintained when theserver 102 is turned off. Examples of such temporary memory include volatile memories such as random access memories (RAM), dynamic random access memories (DRAM), and static random access memories (SRAM).Memory 304 also maintains program instructions for execution by processor(s) 302. -
Server 102 uses network interface(s) 306 to communicate with external devices via one or more networks depicted asnetwork 108 andnetwork 112 inFIG. 1 . Such networks may also include one or more wireless networks, wired networks, fiber optics networks, and other types of networks through which communication between theserver 102 and an external device may be established. Network interface(s) 306 may be a network interface card, such as an Ethernet card, an optical transceiver, a radio frequency transceiver, or any other type of device that can send and receive information. -
Storage devices 308 in server 102 also include one or more non-transient computer-readable storage media. Storage devices 308 are generally configured to store larger amounts of information than memory 304. Storage devices 308 may further be configured for long-term storage of information. In some examples, storage devices 308 include non-volatile storage elements. Non-limiting examples of non-volatile storage elements include magnetic hard discs, optical discs, floppy discs, flash memories, resistive memories, or forms of electrically programmable memories (EPROM) or electrically erasable and programmable (EEPROM) memories. -
Server 102 further includes instructions that implement animage processing engine 310 that receives images of a damaged vehicle from one ormore client devices 104 and performs image processing on the images.Server 102 further includes instructions that implement adamage estimation engine 312 that receives the images processed by theimage processing engine 310 and, in conjunction with a database query and editengine 314 that has access to adatabase 110 storing parts and labor costs, calculates an estimate for repair or replacement of the damaged vehicle. -
FIG. 4 is a flow diagram of method steps for automatically estimating a repair cost for a vehicle, according to one embodiment of the disclosure. As shown, themethod 400 begins atstep 402, where a server, such asserver 102, receives one or more images of a damaged vehicle from a client device. In some embodiments, the images may include additional metadata, such as GPS location. - At
step 404, the server performs image processing on the one or more images to detect external damage of the vehicle. In one embodiment, as described in greater detail inFIG. 5 , performing image processing on the one or more images includes: image cleaning to remove artifacts such as background and specular reflection, image alignment to an undamaged version of the vehicle, image segmentation into vehicle parts, and damage assessment, including edge distribution, texture comparison, and spatial correlation detection. Another embodiment for performing image processing on the one or more images to detect external damage of the vehicle is described inFIG. 27 . - In some embodiments, if the camera's position and orientation are known for a given image, this information can help with the image alignment step by providing a rough estimation of the two-dimensional projection required to produce the reference image. In some embodiments, if an outline of the vehicle or the part whose image is intended to be taken is placed within the camera view for the image taker to align the image to, then the accuracy and efficiency of the background removal procedure can be substantially improved. In some embodiments, if the state of the vehicle just prior to and during the accident can be obtained from a telematics system, then a dynamic model of the vehicle movement can be constructed, the forces each part of the vehicle is subject during any impact estimated, and therefore, the amount of its distortion including displacement in depth assessed.
- At
step 406, the server infers internal damage to the vehicle from detected external damage. Once the externally damaged parts are identified, the server can look up in a database which internal parts are also likely to be replaced based on the set of damaged external parts. This inference can be based on historical models for which internal parts needed to be replaced given certain external damage in prior repairs. - At
step 408, the server calculates an estimated repair cost for the vehicle based on the detected external damage and inferred internal damage. The server accesses one or more databases of parts and labor cost for each external and internal part that is estimated to need repair or replacement. The estimate can be provided to an insurance claim adjuster for review, adjustment, and approval. -
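The historical-model lookup of step 406 can be illustrated with a small sketch. The co-occurrence counts below are invented for illustration; an actual system would derive these rates from archived claims data, and the threshold is an assumed tuning parameter.

```python
# Hypothetical historical co-occurrence counts: how often an internal part was
# replaced when a given external part was damaged, out of the claims observed.
HISTORICAL_COUNTS = {
    "front bumper": {"claims": 500, "radiator": 120, "ac condenser": 90},
    "hood":         {"claims": 300, "radiator": 60,  "battery tray": 30},
}

def infer_internal_damage(external_parts, threshold=0.2):
    """Return internal parts whose historical replacement rate exceeds the threshold."""
    likely = set()
    for ext in external_parts:
        record = HISTORICAL_COUNTS.get(ext)
        if not record:
            continue
        claims = record["claims"]
        for internal, count in record.items():
            if internal == "claims":
                continue
            if count / claims >= threshold:
                likely.add(internal)
    return sorted(likely)

print(infer_internal_damage(["front bumper", "hood"]))  # ['radiator']
```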
FIG. 5 is a flow diagram of method steps for performing image processing on one or more images to detect external damage of the vehicle, according to one embodiment of the disclosure. In some embodiments, the steps of the method shown in FIG. 5 provide an implementation of step 404 from FIG. 4. - As shown in
FIG. 5, the method begins at step 502, where a server performs image cleaning to remove background and specular reflection. At step 504, the server performs image alignment to align the image to a reference image. At step 506, the server performs image segmentation to divide the vehicle into vehicle parts. At step 508, the server performs damage assessment based on (a) edge distribution, (b) texture, and (c) spatial correlation.
FIG. 5 . A detailed explanation of performing each ofsteps - At step 502 (i.e., image cleaning), each image is cleaned to remove background and specular reflections due to incident light.
- In a first embodiment of implementing
step 502, background removal can be performed with image segmentation using Conditional Random Fields (CRF) realized as Recurrent Neural Networks (RNN). - In the technique, the image is modeled as a conditional random field. Each pixel in the image is regarded as a node in a mathematical graph. Two nodes are connected by an edge in the graph if their corresponding pixels are neighbors. Each node is assigned a binary label according to whether the corresponding pixel is deemed to belong to the foreground (i.e., the vehicle) or the background. The binary label can be taken to be 1 for the foreground and −1 for the background. Once all of the pixels in the image have been assigned a binary label properly, the pixels labeled as background can be removed achieving segmentation of the background.
- In order to find the node binary labels, two functions are used. The value of the function ψu(xi) denotes the “cost” of the node I taking the value xi. The value of the function ψp(xi, xj) denotes the “cost” of the neighboring nodes I and J taking the value xi and xj, respectively. Using these functions, the following energy function for an image X can be defined:
-
- The probability of an image is defined to be e(−E(X)) suitably normalized. The task is to learn the parameters of two functions ψu and ψp from a large database of real images so that their probabilities are maximized, or equivalently, their energies are minimized.
- The unary function ψu can be learned using a convolutional neural network (CNN). The network is repeatedly shown a succession of training images in which each pixel has been correctly labeled as foreground/background. Starting with random weights, the weights are adjusted using a standard backpropagation algorithm in order to predict the labeling correctly.
- The function ψp can be modeled as:
-
- where kG is a Gaussian kernel, ƒi are features derived from the image and μ is a label-compatibility function. ψp can be learned using the following algorithm, in the which the steps can be implemented as a CNN:
-
- In a second embodiment of implementing
step 502, for background removal, an “active contour” technique can be used to produce a curve called a contour that lies as close to the boundary of the vehicle in the image as possible. The contour serves to separate the vehicle from its background. Anything outside the curve is then removed (e.g., by converting that part of image to black or white, depending on the color of the vehicle). - In one embodiment, the active contour technique starts with a user-supplied initial contour (i.e., closed curve) containing the vehicle within the photo and defining an energy function of the contour that takes its minimum value when the contrast in color and intensity across the contour is maximum, which is assumed to be the indicator of the vehicle boundary. For example, the user-supplied initial contour can be provided by an insurance adjuster utilizing a computing device in communication with the server.
- The initial contour is evolved along the gradient of the energy function until the gradient becomes zero, i.e., when the energy function has achieved an extremal value. An energy function E is defined so that its minimum should correspond to a good segmentation of the image into foreground and background:
-
E(α,k,θ,z)=U(α,k,θ,z)+V(α,z)
- In one embodiment, the color term U is a Gaussian Mixture Model (GMM) defined as follows:
-
- where p( ) is a Gaussian probability distribution and π( ) is the mixture weighting coefficient, so that:
-
- Therefore, the color modeling parameters are:
-
θ={π(α,k), μ(α,k), Σ(α,k)}, α=0,1, k=1 . . . K
-
- where [ ] denotes the indicator
function taking values - In one embodiment, a user, such as a claims adjuster, initializes the process by supplying an initial background for the image. For example, initialize a=0 for pixels in background and a=1 for pixels in foreground. An iterative process is then performed as follows:
-
-
- Assign GMM components to pixels:
-
-
- 2 GMM parameters from data z:
-
-
- 3, Estimate segmentation: use min cut to
-
-
- 4 Repeat from
step 1, until convergence.
- 4 Repeat from
- However, the choice of the initial contour is critical, and the active contour technique itself does not specify how to choose an appropriate initial contour. Since the location of the vehicle within the image is not known, one might put the initial contour at or close to the boundary of the photo in order to ensure that the vehicle is always contained within it. However, this often results in other objects being included in the final result of the background removal process.
- Some embodiments of the disclosure improve upon existing techniques by using a Deformable Part Model (DPM) to obtain the initial contour. DPM is a machine learning model usually used to recognize objects made of moveable parts. At a high level, DPM can be characterized by strong low-level features based on histograms of oriented gradient (HOG) that is globally invariant to illumination and locally invariant to translation and rotation, efficient matching algorithms for deformable part-based models, and discriminative learning with latent variables. After training on a large database of vehicles in various orientations, the DPM learns to put a bounding box around the vehicle in the photo. This bounding box can then serve as the initial contour.
- Even with a much better choice of initial contour, the background removal process is not always perfect due to the presence of damage and specular reflections. For example, sometimes only part of the vehicle is retained. To solve this problem, embodiments of the disclosure provide a solution by first segmenting the image into “super-pixels.” A super-pixel algorithm group pixels into perceptually meaningful atomic regions. Therefore, if parts of the atomic region are missing, embodiments of the disclosure can recover them by checking atomic region integrity. In one implementation, k-means clustering can be used to generate super-pixels. The similarity measurement for pixels is determined by the Euclidean distance in LAB space (i.e., a type of color space).
- In view of the above, embodiments of the disclosure provide novel image processing techniques to achieve excellent performance on background removal.
-
FIG. 6 is an example of an input image of a damaged vehicle, according to one embodiment of the disclosure. FIG. 7 is an example of a color distribution of the input image from FIG. 6, according to one embodiment of the disclosure. FIG. 8 is an example of an edge distribution of the input image from FIG. 6, according to one embodiment of the disclosure. - In some embodiments, specular reflection removal is also used to remove specular reflections on the metallic surfaces of the vehicle. Reflection removal is performed by a combination of two techniques. In a first technique, embodiments of the disclosure apply a high-pass spatial filter to the image. Applying a high-pass filter assumes that specular reflections are low spatial frequency additive components of the image intensity. The frequency threshold of the filter can be determined empirically.
- In a second technique, embodiments of the disclosure apply a method that examines each pixel of the image. Pixels whose intensity values have reached a maximum in any of the three color channels (i.e., red (R), green (G), and blue (B)) are assumed to be "saturated" due to strong incident light, and are re-assigned color values of nearby pixels that are of the same color, but unsaturated. This technique of finding the appropriate nearest unsaturated pixel is novel relative to conventional approaches. Among the nearest such pixels, embodiments of the disclosure choose the ones that lie on the same part of the vehicle as the saturated pixel in question, which ensures that they have the same true color. The mean ratios between the R, G, and B values of the unsaturated pixels are then used to correct the RGB values of the saturated pixel, because the ratios are expected to remain invariant despite considerable lighting variations.
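- A rough sketch of the two reflection-removal techniques is given below; the Gaussian kernel size, saturation level, and neighborhood radius are assumed values, and the same-part constraint and RGB-ratio correction described above are simplified here to a plain neighborhood mean.

import cv2
import numpy as np

image = cv2.imread("vehicle_no_background.jpg").astype(np.float32)

# Technique 1: high-pass filter, assuming specular reflections are
# low-spatial-frequency additive components of image intensity.
low_freq = cv2.GaussianBlur(image, (51, 51), 0)
high_pass = np.clip(image - low_freq + 128.0, 0, 255)

# Technique 2: correct saturated pixels from nearby unsaturated pixels.
saturated = (image >= 250).any(axis=2)             # assumed saturation threshold
ys, xs = np.nonzero(saturated)
for y, x in zip(ys, xs):
    # Search a small neighborhood for unsaturated pixels (assumed radius).
    y0, y1 = max(0, y - 10), min(image.shape[0], y + 11)
    x0, x1 = max(0, x - 10), min(image.shape[1], x + 11)
    patch = image[y0:y1, x0:x1]
    ok = (patch < 250).all(axis=2)
    if ok.any():
        # Simplification: use the mean of unsaturated neighbors; the embodiment
        # above instead uses the RGB ratios of neighbors on the same part.
        image[y, x] = patch[ok].mean(axis=0)

cv2.imwrite("vehicle_reflection_removed.jpg", image.astype(np.uint8))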
-
FIGS. 9A-9C illustrate a specular reflection removal process, according to one embodiment of the disclosure. FIG. 9A illustrates an input image of a damaged vehicle with the background removed. FIG. 9B illustrates the low frequency components of the damaged vehicle in FIG. 9A. FIG. 9C illustrates a reflection-removed version of the vehicle in FIG. 9A, with low frequency components removed and color corrected to remove saturated pixels. - Referring back to
FIG. 5 , at step 504 (i.e., image alignment), a reference image is found for the same vehicle type that is taken from the same camera position and orientation as the damaged vehicle image. Once the input image is aligned to a reference image, the server is able to overlay the two images on top of each other so that the vehicle boundaries within them more or less coincide. This is called image alignment. - In one embodiment, the server starts with a three-dimensional model of the vehicle and finds a two-dimensional projection of the three-dimensional model that best matches the cleaned image of the damaged vehicle. The match is determined in two stages.
- In a first stage, "mutual information" between the input image and a template is determined. Mutual information is a statistical measure of similarity of the spatial distributions of the normalized intensities in the two images. In order to find the best match, a sequence of "similarity transformations" is applied to the three-dimensional model, and the mutual information of the resulting two-dimensional projections is computed until the one with the maximum mutual information is obtained. The top few templates with the highest mutual information with the damaged image are kept. The top-ranked template is not necessarily the correct one, because mutual information sometimes fails to distinguish between the front/back or left/right sides of the vehicle.
- In a second stage, another statistical measure “cross-correlation” is used to choose among the top few selected templates. Cross-correlation measures different similarity properties of the two images, and therefore, is able to break the tie among the front/back or left/right sides to come up with the correct template.
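- The two-stage template selection might be sketched as follows, where the inputs are equally sized grayscale arrays; the histogram bin count, the number of retained candidates, and the use of a global normalized cross-correlation are illustrative assumptions.

import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between the normalized intensities of two images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)        # marginal of the first image
    py = pxy.sum(axis=0, keepdims=True)        # marginal of the second image
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def normalized_cross_correlation(a, b):
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float((a * b).mean())

def select_template(damaged, templates, keep=3):
    """Stage 1: rank templates by mutual information; stage 2: break ties by cross-correlation."""
    ranked = sorted(templates, key=lambda t: mutual_information(damaged, t), reverse=True)
    return max(ranked[:keep], key=lambda t: normalized_cross_correlation(damaged, t))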
- According to some embodiments, three-dimensional models of various vehicles can be purchased from commercial providers of three-dimensional renderings of objects, including the vehicle manufacturers themselves. Alternatively, the three-dimensional models can be constructed from a collection of two-dimensional images of the vehicle taken prior to occurrence of damage. In one implementation of constructing the three-dimensional model from two dimensional images, first a number of feature points of a certain type, e.g., scale-invariant feature transform (SIFT) are computed in each two-dimensional image. Next, correspondences between similar feature points across images are established. These correspondences determine the mutual geometrical relationships of the two-dimensional images in three-dimensional space using mathematical formulas. These relationships allow us to “stitch” the two-dimensional images together into a three-dimensional model of the vehicle.
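- For the alternative of building the three-dimensional model from two-dimensional images, the feature-matching step might look like the sketch below; OpenCV's SIFT implementation and Lowe's ratio test are assumptions, and the geometry recovery and stitching steps are omitted.

import cv2

# Two hypothetical undamaged views of the same vehicle (requires OpenCV >= 4.4 for SIFT).
img1 = cv2.imread("undamaged_view_1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("undamaged_view_2.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Establish correspondences between similar feature points across images,
# keeping only matches that pass Lowe's ratio test.
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
# These correspondences determine the relative geometry of the views; across
# many views they allow the 2D images to be stitched into a 3D model.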
- At step 506 (i.e., image segmentation), the cleaned image of the damaged vehicle is segmented into vehicle parts, i.e., the boundaries of the vehicle parts are determined and drawn. Segmentation is carried out in order to assess damage on a part-by-part basis, which makes for more robust damage assessment.
- First, the reference image is itself segmented. This can be done easily, since commercial three-dimensional models usually come equipped with segmentation into its component parts.
- Next, an attempt is made to locate each part present in the reference image within the damaged input image. The initial position of the part is located by simply overlaying the reference image onto the damaged image and projecting the boundary of the part onto the damaged image. This projection is then shrunk uniformly in order to arrive at an initial contour. The initial contour is then evolved along the gradient of an energy function, in a manner analogous to the method of background removal, until the energy function reaches its minimum, which is regarded as occurring when the contour coincides with the part boundary, where there is a locally large difference in intensity across the contour. In order to prevent one part from "leaking" into another, some embodiments use the part template to define the zone within which the evolving part in the damaged image must be confined. Some embodiments also apply consistency checks across the different parts found, to make sure that they do not overlap and that no part is completely absent.
- In some embodiments, level set methods can be used to perform image segmentation. In level set methods, a contour of interest is embedded as the zero level set of a level-set function (LSF) ϕ, where ϕ is a function of time t. Initially, at t=0, some embodiments choose a seed contour inside the object of interest. For segmentation applications, the energy function is an edge-based geometric active model. The function is defined such that its minimum is reached (and the contour therefore stops evolving) as soon as the zero level set touches the object boundary. In one implementation, the energy function is defined as:
-
ε(ϕ) = μ ∫Ω p(|∇ϕ|) dx + λ ∫Ω g δε(ϕ) |∇ϕ| dx + α ∫Ω g Hε(−ϕ) dx
- The first term in the energy function ε above is the regularization term. The regularization function is defined as:
-
- Let I be an image on a domain Ω, and the edge indicator function g is defined as:
g = 1 / (1 + |∇(Gσ * I)|²)
- where Gσ is a Gaussian smoothing kernel. In some embodiments, the Gaussian kernel is replaced with a non-linear filter that is called a bilateral filter. The filter weights depend not only on Euclidean distance of pixels, but also on the radiometric difference, e.g., pixel grayscale intensity. This preserves sharp edges by systematically looping through each pixel and adjusting weights to the adjacent pixels accordingly.
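- A sketch of this edge indicator, with the option of replacing the Gaussian kernel by a bilateral filter, is shown below; the σ value and bilateral filter parameters are assumed.

import cv2
import numpy as np

def edge_indicator(gray, sigma=1.5, use_bilateral=False):
    """g = 1 / (1 + |grad(smoothed I)|^2): small near edges, close to 1 in flat regions."""
    img = gray.astype(np.float32)
    if use_bilateral:
        # Bilateral filter weights depend on spatial distance and intensity
        # difference, preserving sharp edges better than a Gaussian.
        smoothed = cv2.bilateralFilter(img, 9, 75, 75)
    else:
        smoothed = cv2.GaussianBlur(img, (0, 0), sigma)
    gx = cv2.Sobel(smoothed, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(smoothed, cv2.CV_32F, 0, 1)
    return 1.0 / (1.0 + gx ** 2 + gy ** 2)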
- The second term in the energy function ε above is a line integral of the function g along the zero level set of the level-set function ϕ. The other integral part is defined as:
-
- The third term in the energy function ε above is to speed up the evolution. The function is defined as:
-
- The energy function ε is minimized by solving the gradient flow:
∂ϕ/∂t = −∂ε/∂ϕ
- At the end of the image segmentation step, each vehicle part present in the image of the damaged vehicle is separately delineated.
- At step 508 (i.e., damage assessment), the segmented image of the damaged vehicle and the corresponding reference image are compared for significant differences that are attributable to damage to the vehicle. The reference image can be the image of the same vehicle prior to occurrence of damage, or a rendering of a commercial 3D model. In order to localize damage, each image is divided into small rectangular regions called "windows" in such a manner that the window boundaries in the two images coincide. Within each window, the images are compared for edge distribution, texture, and spatial correlation.
- For edge distribution, embodiments of the disclosure follow the observation that an undamaged image of a vehicle consists primarily of edges (i.e., straight line segments arising from significant and consistent changes in color and intensity) that are regular in structure and orientation, which are disturbed in the portions where damage has occurred. Embodiments of the disclosure first find edges in the two images using a standard edge finding algorithm, and then compute the distributions of the lengths and orientations of edges in each window. The distance between the distributions within a window is then computed (using entropy or Kullback-Leibler divergence, for example). If the distance for a window exceeds an empirically determined threshold, the window may contain damage.
- According to one implementation of a method for edge map comparison, the method first computes the edges of each part using the Canny edge detector. Second, the method detects straight lines on the edge maps in all possible orientations. Then, the method calculates the probability of each orientation containing a straight line. Finally, the method checks the entropy difference between the template and the damaged car based on the probability distribution obtained from the last step.
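- A minimal sketch of this edge-map comparison follows (Canny edges, a probabilistic Hough transform for straight lines, an orientation histogram treated as a probability distribution, and an entropy difference); the bin count and Hough parameters are assumptions.

import cv2
import numpy as np

def orientation_distribution(gray, bins=18):
    """Probability of each orientation containing a straight line."""
    edges = cv2.Canny(gray, 100, 200)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=20, maxLineGap=5)
    hist = np.full(bins, 1e-6)                 # small floor avoids log(0)
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            angle = np.arctan2(y2 - y1, x2 - x1) % np.pi
            hist[int(angle / np.pi * bins) % bins] += 1
    return hist / hist.sum()

def entropy(p):
    return float(-(p * np.log(p)).sum())

def edge_damage_score(reference_gray, damaged_gray):
    """Entropy difference of edge orientations between template and damaged part."""
    return abs(entropy(orientation_distribution(reference_gray))
               - entropy(orientation_distribution(damaged_gray)))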
- Regarding texture comparison, texture is a way to characterize patterns of intensity changes across an image. In an image of a clean vehicle, each part of the vehicle has a specific texture. When the part is damaged, the part's texture often changes also. Embodiments of the disclosure compute measures of texture such as entropy, derived from locally-oriented intensity gradients for both images in each window and take their difference. If the sum of the magnitudes of differences exceeds an empirically established threshold, the window is regarded as possibly containing damage.
- According to one implementation of a method for texture difference detection, the image pair is first converted to grayscale. Then, the method computes the co-occurrence matrix for each part. Finally, the method checks the homogeneity difference based on the co-occurrence matrices.
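- The co-occurrence comparison might be sketched with scikit-image as below (an assumed library choice, using the graycomatrix/graycoprops names from recent versions); the distance and angle parameters are illustrative.

import numpy as np
from skimage.color import rgb2gray
from skimage.feature import graycomatrix, graycoprops
from skimage.util import img_as_ubyte

def homogeneity(image_rgb):
    """Homogeneity of the gray-level co-occurrence matrix for one part/window."""
    gray = img_as_ubyte(rgb2gray(image_rgb))
    glcm = graycomatrix(gray, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return float(graycoprops(glcm, "homogeneity").mean())

def texture_damage_score(reference_rgb, damaged_rgb):
    """Difference in co-occurrence homogeneity between reference and damaged part."""
    return abs(homogeneity(reference_rgb) - homogeneity(damaged_rgb))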
- For image correlation, in one embodiment, the auto-correlation and cross-correlation difference metric is computed as follows:
-
Metric = ∫−∞∞ ∫−∞∞ ∫−∞∞ ∫−∞∞ ƒ(x−a, y−b) {g(x,y) − ƒ(x,y)} dx dy da db
-
FIG. 10A is an example of a reference image of a vehicle, according to one embodiment of the disclosure. FIG. 10B is an example of an input image of damage to a vehicle, according to one embodiment of the disclosure. - As described above, the reference image and input image are divided into segments or "windows" that are compared to one another on the basis of edge distribution, texture, and spatial correlation. These measures of difference between the two images are then combined together for the final determination of damage within each window.
- In some embodiments, if more than one measure contributes to the existence of damage, the system asserts that damage within the window exists. The exact proportion of weight assigned to each measure can be determined empirically through testing on real images. The weights can also be determined through supervised machine learning on auto claims data.
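- Combining the per-window indicators might look like the sketch below, where the weights and decision threshold are placeholders for values determined empirically or learned from claims data.

def window_is_damaged(edge_score, texture_score, frequency_score,
                      weights=(0.4, 0.3, 0.3), threshold=0.5):
    """Weighted combination of the three per-window damage indicators."""
    combined = (weights[0] * edge_score
                + weights[1] * texture_score
                + weights[2] * frequency_score)
    return combined > threshold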
-
FIG. 11 is a conceptual diagram illustrating comparing a window from a reference image to a corresponding window of an input image where no damage is detected, according to one embodiment of the disclosure. As shown, for each of edge distribution, texture, and spatial correlation, the difference between the window from the reference image and the window from the input image does not exceed the respective threshold. -
FIG. 12 is a conceptual diagram illustrating comparing a window from a reference image to a corresponding window of an input image where damage is detected, according to one embodiment of the disclosure. As shown, damage is detected since the difference for each of edge distribution, texture, and spatial correlation exceeds the respective threshold. As described, in some embodiments, if one of the metrics exceeds its threshold, then damage is detected. In other embodiments, two or three metrics exceeding their thresholds indicate that damage is detected. -
FIG. 13 is a conceptual diagram illustrating the various windows in an input image where damage is detected, according to one embodiment of the disclosure. Now that the several indicators of damage within each window have been aggregated, for each vehicle part in the image found during the segmentation step, embodiments of the disclosure compute whether the fraction of “damaged” windows to the total number of windows covering the part exceeds a threshold. If it does, the whole part is declared as damaged. Otherwise, it is not damaged. The “damaged” windows are themselves combined together within their outer boundaries, which can be displayed to show the location of damage within each part, as shown inFIG. 13 . The fraction of damaged area can be regarded as an indicator of the severity of damage to the part. - In addition to these “local” measures of damage, some embodiments can also compute the overall shape of each vehicle part in the two images using a shape descriptor, e.g., medial axis, and regard significant difference between the two as further evidence of damage, which can be combined in a weighted manner with the preceding indicators to arrive at the final estimate.
- Referring back to
FIG. 4 , once external damage is detected atstep 404, internal damage can be inferred atstep 406. Since there is no direct evidence of damage to internal parts from images of the damaged vehicle, embodiments of the disclosure infer damage to internal parts from damage to the external parts. In one implementation, pattern mining large amounts of data of past auto claims can be used to infer damage to the internal parts. - Some embodiments take a large number (e.g., on the order of thousands) of auto claims that contains images of the damaged vehicles and the corresponding appraisals of damaged parts, as found by auto repair shops for repair purposes. Taken together, these historical claims provide enough evidence to establish a high degree of correlation between damage visible in the images and the entire list of damaged parts, both internal and external. In one embodiment, a Convolutional Neural Network (CNN) is trained to learn this correlation. A CNN is a type of mathematical device called a neural network that can be gradually tuned to learn the patterns of correlation between its input and output from being presented a large number of exemplars of input/output pairs called training data. CNNs are configured to take into account the local structure of visual images and invariance properties of objects that are present in them. CNNs have been shown to be highly effective at the task of recognition of objects and their features provided there are enough exemplars of all possible types in the data used to train them. Some embodiments train a CNN to output a complete list of damaged parts when presented with the set of images associated to an auto claim. This includes both internal and external parts. The performance of the CNN can be made more robust when it is presented with the output of the external damage detection system described above. The output of the external damage detection system “primes” the CNN with the information about which external parts are more likely to be damaged, and thereby, increases its accuracy and speed of convergence to the solution.
- After both external and internal damaged parts are identified, the system can calculate an estimated repair cost at
step 408. To arrive at the estimated cost of parts and labor needed for repairing the vehicle, some embodiments provide the damaged parts list to a database of parts and labor costs. Several such databases exist and are already used by auto repair shops and insurance adjustors on a daily basis once a parts list is identified. -
FIGS. 14-21 are screenshots of example interface screens of a vehicle claims application on a client device, according to various embodiments of the disclosure. As described, a vehicle claims application, such as vehicle claims application 218 in FIG. 2, may be used to capture images of a damaged vehicle and upload them to a server for processing. -
FIG. 14 shows an example log-in screen of the vehicle claims application. Once the user is authenticated, a home screen may be displayed, as shown in FIG. 15. Various links can be provided on the home screen to initiate a new claim 1502, review current policies 1504, review prior claims ("My Claims"), find nearby repair shops, view emergency contacts, view the user's profile, and view information about an insurance company ("About Us"). Selecting the current policies 1504 link may display policy information, as shown in FIG. 16. - If the user selects the
new claim 1502 link, the interface in FIG. 17 may be shown. If there are multiple vehicles insured, the user is asked to select the vehicle to which the new claim relates. Once a vehicle is selected, the interface in FIG. 18 may be displayed, where the user is prompted to take photos of the damaged parts of the vehicle. The vehicle claims application may prompt the user for certain photos using a three-dimensional (3d) model 1802, a parts list 1804, and vehicle views 1806. - If the user selects to be prompted by a
3d model 1802, the interface in FIG. 19A may be displayed. A 3d model of the user's vehicle is displayed and the user is prompted to tap on the portion of the vehicle that is damaged. For example, the user may tap on the hood of the vehicle, which causes an interface such as the one shown in FIG. 19B to be displayed. If the user selects "Yes" in the prompt in FIG. 19B, the interface in FIG. 19C may be displayed. In FIG. 19C, an outline 1902 is displayed for the hood of the vehicle superimposed on a live camera view from the client device. The user can then position the camera of the client device so that the hood of the car aligns with the outline 1902. Once the hood of the car aligns with the outline 1902, a photo is captured, either automatically by the camera or manually by the user selecting a capture button. The user can be prompted in this manner to capture photos of all damaged parts using a 3d model of the vehicle. - If instead the user selects to be prompted by a
parts list 1804, the interface in FIG. 20A may be displayed. The user is first prompted to select the general section of the vehicle that sustained damage. Suppose the user selects "Front" from the interface shown in FIG. 20A, which causes the interface shown in FIG. 20B to be displayed. The user is then prompted to select a more specific section or part of the vehicle that sustained damage. Once the user makes a selection, an outline for that part is displayed (similar to the outline 1902 in FIG. 19C), and the client device proceeds to capture the requisite photo. - If instead the user selects to be prompted by
vehicle views 1806, the interface inFIG. 21 may be displayed. The user is prompted to capture photos of eight views of the vehicle, for example: front-left perspective, front plan, front-right perspective, left plan, right plan, rear-left perspective, rear plan, rear-right perspective. In other implementations, different views may be requested. - Once the user captures the images of the damaged vehicle using the prompts provided by the vehicle claims application, the images are uploaded to a server over a network. The server is then configured to perform image processing operations on the images to identify damaged external parts, infer damaged internal parts, and estimate repair costs, as described above.
-
FIG. 22 is a screenshot of an example interface screen of an adjuster computing device connected via a communications interface to a server configured to automatically estimate repair costs, according to one embodiment of the disclosure. InFIG. 22 , inportion 2200 of the interface, the original images uploaded to the server are shown. In this example, three images have been received by the server. Each of the three images is processes separately. Inportion 2202, the image currently being processed is displayed. Inportion 2204, the image after background and specular reflection removal is shown. Inportion 2206, the clean reference image is shown aligned with the image shown inportion 2204. Using the techniques described herein, the image shown inportion 2204 is compared to the image shown inportion 2206 to identify the external parts that are damaged, from which internal parts are inferred, and total repair costs are estimated. - In some embodiments, in order to assist the adjustors to make decisions quickly and easily using the output of the disclosed automated system, damaged area in each input image are marked in a contrasting color. Also, a label can be put onto the damaged part. Some embodiments then project the images onto the 3D model of the vehicle using the camera angles determined during the alignment process. The 3D model then shows the damage to the vehicle in an integrated manner. The adjustor can rotate and zoon in on the 3D model as desired. When the adjustor clicks on a damaged part, the interface may show all the original images that contain that part on the side, so that the adjustor can easily examine in the original images where the damage was identified.
-
FIG. 23 is a flow diagram of method steps for a vehicle claims application to prompt a client device to capture images of a damaged vehicle, according to one embodiment of the disclosure. At step 2302, the vehicle claims application receives a selection to initiate a new claim. - At
step 2304, the vehicle claims application receives a selection of a prompting interface for the capture of images of the damaged vehicle. - If the prompting interface is to capture images using a 3D model of the vehicle, at
step 2306, the vehicle claims application displays a 3D model of the vehicle. Atstep 2308, the vehicle claims application receives a selection of a damaged part on the 3D model. At step 2310, the vehicle claims application displays an outline of the selected part for a user to capture with a camera of the client device. - If the prompting interface is to capture images using a parts list of the vehicle, at
step 2312, the vehicle claims application displays a parts list. At step 2314, the vehicle claims application receives a selection of a part and, at step 2316, displays an outline of the part for the user to capture with the camera of the client device. - If the prompting interface is to capture images using vehicle views, at
step 2318, the vehicle claims application displays two or more vehicle views and, atstep 2320, displays an outline for each vehicle view to capture with the camera of the client device. - At
step 2322, the vehicle claims application capture images of damage to vehicle using the camera of the client device. Atstep 2324, the vehicle claims application uploads the captured images to a server for automatic estimation of repair costs. - In another implementation of the automatic vehicle damage assessment (AVDA) system, rather than comparing photos of a damaged vehicle to an undamaged version, another embodiment of the disclosure relies upon machine learning methods to learn patterns of vehicle damage from a large number of auto claims in order to predict damage for a new claim. In general, machine learning systems are systems that use “training data” to “learn” to associate their input with a desired output. Learning is done by changing parameters of the system until the system outputs results as close to the desired outputs as possible. Once such a machine system has learned the input-output relationship from the training data, the machine learning system can be used to predict the output upon receiving a new input for which the output may not be known. The larger the training data set and the more representative of the input space, the better the machine learning system performs on the prediction task.
- Some embodiments use machine learning to perform the task of prediction of vehicle damage from an auto claim. Thousands of historical auto claims are stored in one or more databases, such as
database 110 inFIG. 1 , for training and testing of the disclosed system. The database also stored auto claim images and other pieces of information that come with a claim, such as vehicle make, model, color, age, and current market value, for example. The desired output of the disclosed machine learning system is the damage appraisal as prepared by a repair shop consisting of a list of parts that were repaired or replaced and the corresponding costs of repair, both for parts and labor. Another desired output is the determination of the loss type, namely, total loss, medium loss, or small loss, for example. -
FIG. 24 is a block diagram illustrating a multi-stage design of a machine learning system, according to one embodiment. Atstage 2402, claims data for thousands of auto claims is input into the machine learning system. The machine learning system is executed by one or more computing devices, such asservers 102 inFIG. 1 . - At
stage 2404, the machine learning system uses a machine learning method called Convolutional Neural Network (CNN) to detect external damage. A CNN is a type of machine learning method called an artificial neural network. A CNN is specially designed for image inputs based on analogy with the human visual system. A CNN consists of a number of layers of “neurons” or “feature maps,” also called convolution layers, followed by a number of layers called fully connected layers. The output of a feature map is called a feature. In the convolution layers, the CNN extracts the essential aspects of an image in a progressively hierarchical fashion (i.e., from simple to complex) by combinatorially combining features from the previous layer in the next layer through a weighted non-linear function. In the fully connected layers, the CNN then associates the most complex features of the image computed by the last convolution layer with any desired output type, e.g., a damaged parts list, by outputting a non-linear weighted function of the features. The various weights are adjusted during training, by comparing the actual output of the network with the desired output and using a measure of their difference (“loss function”) to calculate the amount of change in weights using the well-known backpropagation algorithm. Additional implementation details of the CNNs of the disclosed machine learning system are described in detail below. - At
stage 2406, the machine learning system predicts damage to the interior parts of the vehicle from the exterior damage assessment output bystage 2404. Some embodiments employ a Markov Random Field (MRF). An MRF defines a joint probability distribution over a number of random variables whose mutual dependence structure is captured by an undirected (mathematical) graph. The graph includes one node for each random variable. If two nodes are connected by an edge, then the corresponding random variables are mutually dependent. The MRF joint distribution can be written as a product of factors, one each of a maximal clique (i.e., a maximal fully connected subgraph) in the graph. Additional implementations details of an MRF of the disclosed machine learning system are described in detail below. - At
stage 2408, after the list of both exterior and interior damaged parts has been prepared, the machine learning system prepares a repair cost appraisal for the vehicle by looking up the damaged parts and labor cost in a database. The damaged parts list can be compared to a list of previously damaged parts prior to the occurrence of the current damage, and a final list of newly damaged parts is determined through subtraction of previously damaged parts. Some embodiments also take into account the geographical location, age of the vehicle, and other factors. - Additionally, some embodiments can classify a claim into categories as a total, medium, or small loss claim by taking the damaged parts list, repair cost estimation, and current age and monetary value of the vehicle as input to a classifier whose output is the loss type which takes the three values—total, medium and small. Any machine learning technique can be used for the classifier, e.g., logistic regression, decision tree, artificial neural network, support vector machines (SVM), and bagging. First, the system is trained on historical claims for which the outcome is known. Once the system parameters have been to achieve a desired degree of accuracy on a test set, the system can be used to perform the loss classification.
- As described, a CNN is a type of machine learning method called an artificial neural network. A CNN consists of a number of layers of “neurons” or “feature maps,” also called convolution layers, followed by a number of layers called fully connected layers. The output of a feature map is called a feature. In the convolution layers, the CNN extracts the essential aspects of an image in a progressively hierarchical fashion (i.e., from simple to complex) by combinatorially combining features from the previous layer in the next layer through a weighted non-linear function. In the fully connected layers, the CNN then associates the most complex features of the image computed by the last convolution layer with any desired output type, e.g., a damaged parts list, by outputting a non-linear weighted function of the features. The various weights are adjusted during training, by comparing the actual output of the network with the desired output and using a measure of their difference (“loss function”) to calculate the amount of change in weights using the well-known backpropagation algorithm.
- A “loss function” quantifies how far a current output of the CNN is from the desired output. The CNNs in some of the disclosed embodiments perform classification tasks. In other words, the desired output is one of several classes (e.g., damaged vs. non-damaged for a vehicle part). The output of the network is interpreted as a probability distribution over the classes. In implementation, the CNN can use a categorical cross-entropy function to measure the loss using the following equation:
-
H(p,q) = −Σx p(x) log(q(x))
In a first example, if we do positive and negative classification, and q=[0.1 0.9] and p=[0 1], then H1=0.1. In a second example, if we do positive and negative classification, and q=[0.9 0.1] and p=[0 1], then H2=2.3. - As described, a CNN is made up of layers. Each layer includes many “nodes” or “neurons” or “feature maps.” Each neuron has a simple task: it transforms its input to its output as a non-linear function, usually a sigmoid or a rectified linear unit, of weighted linear combination of its input. Some embodiments of the disclosure use a rectified linear unit. A CNN has four different types of layers:
-
- 1. “Input layer” that holds the raw pixel values of input images.
- 2. "Convolutional layer" (Conv) that computes its output by taking a small rectangular portion of its input ("window") and applying the non-linear weighted linear combination.
- 3. “Pooling layer” (Pool) that takes a rectangular portion of its input (“window”) and computes either the maximum or average of the input in that window. Embodiments of the disclosure use the maximum operation. This layer reduces the input sizes by combining many input elements into one.
- 4. “Fully connected layer” (FC), where each neuron in this layer will be connected to all the numbers in the previous volume. The output is a non-linear weighted linear combination of its input.
- The parameters of a CNN are:
-
- Number of layers
- Number of neurons in each layer
- Size of the window in each convolution and pooling layer
- Weight vectors for each neuron in each layer
- The parameters of the non-linearity used (the slope of the rectified linear unit in our case)
- Of these, the weight vectors for each neuron in each layer are the ones adjusted during training. The rest of the parameters, once chosen, remain fixed. For example, Table 1 below provides an example of the number of parameters used in one implementation for detection of damage to the front bumper:
-
TABLE 1: CNN parameters

Layer               Representation size    Weights
Input               [240 × 320 × 3]        0
Conv1-64 neurons    [240 × 320 × 64]       (5*5*5)*64 = 8000
Pool1               [120 × 160 × 64]       0
Conv2-64 neurons    [120 × 160 × 64]       (5*5*64)*64 = 102,400
Pool2               [60 × 80 × 64]         0
Conv3-64 neurons    [60 × 80 × 64]         (5*5*64)*64 = 102,400
Pool3               [30 × 40 × 64]         0
FC1-256 neurons     [1 × 1 × 256]          30*40*64*256 = 19,660,800
FC2-256 neurons     [1 × 1 × 256]          256*256 = 65,536
FC3-2 neurons       [1 × 1 × 2]            256*10 = 2,560
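- A network with the layer sizes of Table 1 might be written as in the sketch below; PyTorch is used purely as an illustration (the disclosure does not specify a framework), and the exact weight counts then follow from the input channel depth and window sizes.

import torch
import torch.nn as nn

class FrontBumperDamageCNN(nn.Module):
    """Conv/pool stack plus fully connected layers, sized as in Table 1."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=5, padding=2), nn.ReLU(),   # Conv1
            nn.MaxPool2d(2),                                         # Pool1
            nn.Conv2d(64, 64, kernel_size=5, padding=2), nn.ReLU(),  # Conv2
            nn.MaxPool2d(2),                                         # Pool2
            nn.Conv2d(64, 64, kernel_size=5, padding=2), nn.ReLU(),  # Conv3
            nn.MaxPool2d(2),                                         # Pool3
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(30 * 40 * 64, 256), nn.ReLU(),                 # FC1
            nn.Linear(256, 256), nn.ReLU(),                          # FC2
            nn.Linear(256, 2),                                       # FC3: damaged / not damaged
        )

    def forward(self, x):              # x: [batch, 3, 240, 320]
        return self.classifier(self.features(x))

logits = FrontBumperDamageCNN()(torch.randn(1, 3, 240, 320))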
-
initialize the network weights with small random values
do
    for each image x in the training set
        prediction = compute the output of the network, q(x)    // forward pass
        actual = desired output of the network, p(x)
        compute loss = H(p, q) = −Σx p(x) log(q(x)) for the batch
        compute Δwh = derivative of loss with respect to weight w_h for all
            weights from hidden layer to output layer            // backward pass
        subtract Δwh from the current weights to get new weights
until loss on the training set drops below a threshold
return the network as trained
FIG. 25 is a block diagram illustrating implementation of Convolutional Neural Networks (CNNs) to detect vehicle damage, according to one embodiment. In one implementation of the system, labeled images are input to a CNN and the damaged parts list as the desired output. In another implementation, the input-output association problem is broken down into several sub-problems, each of which is easier for a machine learning system than the full problem, as shown inFIG. 15 . -
Claims data 2502 for thousands or millions of auto claims is input into the exteriordamage detection engine 2506. For a given claim for which vehicle damage is to be detected, the claims data is also passed to a vehiclepose classification engine 2504. - The vehicle pose
classification engine 2504 uses a CNN to first predict the pose of the vehicle. The output of this CNN is one of eight (8) pose categories. For vehicles, the 8 categories may correspond to the eight (8) non-overlapping 45-degree sectors around the vehicle, i.e., front, left front corner, left side, back front corner, back, back right corner, right side, and right front sector. The CNN of the vehicle poseclassification engine 2504 can be trained on a large number of auto claim images that have manually been labeled with the appropriate pose category. - In the exterior
damage detection engine 2506, in one implementation, there is one CNN for each of the exterior vehicle parts, trained to predict damage to that part. In one implementation, a vehicle is divided up into twenty-four (24) exterior parts, and thus, twenty-four (24) vehicle part CNNs, including: -
- Pr1=‘Front Bumper’;
- Pr2=‘Back Bumper’;
- Pr3=‘Front Windshield’;
- Pr4=‘Back Windshield’;
- Pr5=‘Hood’;
- Pr6=‘Car Top’;
- Pr7=‘Front Grill’;
- Pr8=‘Left Front Fender’;
- Pr9=‘Left Front Headlight’;
- Pr10=‘Left Side’;
- Pr11=‘Left Back Headlight’;
- Pr12=‘Left Front Window’;
- Pr13=‘Left Back Window’;
- Pr14=‘Left Front Door’;
- Pr15=‘Left Back Door’;
- Pr16=‘Right Front Fender’;
- Pr17=‘Right Front Headlight’;
- Pr18=‘Right Side’;
- Pr19=‘Right Back Headlight’;
- Pr20=‘Right Front Window’;
- Pr21=‘Right Back Window’;
- Pr22=‘Right Front Door’;
- Pr23=‘Right Back Door’; and
- Pr24=‘Trunk’.
- These CNNs can be trained on the auto claims
images 2502, which have been labeled with an indication of damage to each exterior part visible in the images. - After the pose category has been predicted by the vehicle pose
classification engine 2504 for a given input image, the image is presented to each of the external part CNNs of the exterior damage detection engine 2506. In one implementation, each CNN of the exterior damage detection engine 2506 corresponds to an external part that is potentially visible from that pose. Thus, a part CNN sees only those images at its input that can have the part present in that pose. This reduces the burden on the vehicle part CNNs in the exterior damage detection engine 2506, while increasing their accuracy since they receive only the images relevant to the given vehicle part CNN.
damage detection engine 2506, the machine learning system has a prediction for damage to each of the exterior parts that we can infer from the collection of images for the claim. - This information is passed from the exterior
damage detection engine 2506 to theinterior damage engine 2508. Theinterior damage engine 2508 predicts damage to the interior parts of the vehicle from the exterior damage assessment output by the exteriordamage detection engine 2506. One implementation employs a Markov Random Field (MRF) in theinterior damage engine 2508. An MRF defines a joint probability distribution over a number of random variables whose mutual dependence structure is captured by an undirected (mathematical) graph. The graph includes one node for each random variable. If two nodes are connected by an edge, the corresponding random variables are mutually dependent. The MRF joint distribution can be written as a product of factors, one each of a maximal clique (a maximal fully connected subgraph) in the graph. - In one implementation, there is one random variable for damage level of each of the vehicle parts. The nodes corresponding to a pair of parts are connected by an edge if they are neighboring parts, since damage to one is likely to result in damage to the other. A probability distribution is defined on these random variables that specifies the probability for each subset of the parts that that subset is damaged while its complement is not damaged.
- From the output of the exterior
damage detection engine 2506, we can assign values to the random variables corresponding to the exterior parts. The values of the random variables corresponding to the interior parts can then be inferred by choosing values that result in maximum joint probability for the exterior and interior damaged parts. The inference can be carried out using a belief propagation algorithm.
-
- Here, c is a maximal clique and θc are some parameters associated with the maxical clique. The potential functions ψc are chosen as exponential functions of weighted linear combinations of the parameters θc as:
- In one implementation, ϕc is identity. During training, the parameters θc are adjusted as follows: for any given auto claim, values of the variables θc corresponding to the exterior and interior parts are clamped at their true values. The values of the parameters θc are chosen to then maximize the probability p(y|θ). This is repeated over the entire set of training images until values of θc settle down to more or less fixed values. These final values are taken as the values of the parameters θc for prediction of damage to interior parts.
- The MRF is used to predict damage to interior parts as follows: given a new claim the values of yc corresponding to the exterior parts are fixed at the outputs of the corresponding part CNNs. The values of yc corresponding to interior parts are then chosen to maximize the probability p(y|θ). For any interior parts if yc exceeds a pre-defined threshold, it is regarded as damaged. Otherwise it is regarded as undamaged.
- The external an internal damage estimates are then passed to a
cost estimation engine 2510. Thecost estimation engine 2510 can look up in a database the corresponding cost for repair or replacement of each of the external and internal parts based on make, model, year, and color of the vehicle. Some embodiments also take into account the geographic location of the vehicle, as costs may vary by state or region. - Additionally, some embodiments can classify a claim into categories as a total, medium, or small loss claim by taking the damaged parts list, repair cost estimation, and current age and monetary value of the vehicle as input to a classifier whose output is the loss type which takes the three values—total, medium and small. Any machine learning technique can be used for the classifier, e.g., logistic regression, decision tree, artificial neural network. First, the system is trained on historical claims for which the outcome is known. Once the system parameters have been to achieve a desired degree of accuracy on a test set, the system can be used to perform the loss classification.
-
FIG. 26 is a flow diagram of method steps for estimating vehicle damage from images of a damaged vehicle using a machine learning algorithm, according to one embodiment. As shown, the method 2600 begins at step 2602, where a server computing device trains a first Convolutional Neural Network (CNN) to detect the pose of a vehicle. At step 2604, the server computing device trains a plurality of CNNs to detect damage to a respective plurality of external vehicle parts 2604. At step 2606, the server computing device receives a set of images corresponding to a new claim. At step 2608, the server computing device executes the first CNN to detect the pose of the vehicle in each of the images in the set of images. At step 2610, the server computing device executes the plurality of CNNs to determine which external vehicle parts are damaged. At step 2612, the server computing device executes a Markov Random Field (MRF) algorithm to infer damage to internal parts of the vehicle from the damaged external vehicle parts. At step 2614, the server computing device estimates a repair cost based on the external and internal damaged parts. Additionally, in some embodiments, at step 2614, the server computing device may classify the loss as a total, medium, or small loss, as described above.
FIG. 27 is a flow diagram of method steps for performing image processing on one or more images to detect external damage of the vehicle, according to one embodiment of the disclosure. A first embodiment was presented inFIGS. 4-5 . A second embodiment is presented here inFIG. 27 . In the disclosed second embodiment, a CNN-based deep learning system and 3D geometry methods are assembled together in a hierarchical data processing fashion to accurately localize and measure the damage information available from one or more images. - The method of
FIG. 27 takes damaged vehicle images (2702) and car make and model information (2718) as input, and outputs the damage localization on a 3D model (2716) of the same kind of car make and model. In one implementation, a vehicle can be segmented into twenty-four (24) external body parts, as illustrated inFIG. 28 . The 24 external body metal parts can be target variables of the computations inFIG. 27 , as described in greater detail below. - As described above, in some embodiments, a deep learning system (e.g., Convolutional Neural Network) is trained on a large number of images of damaged vehicles and corresponding information about damage, e.g., its extent and location on the vehicle, in order to learn to assess damage presented with input images. The data used to train the model may be available from an insurance company's auto claims archives. A pattern learning technique can then be used predict damage to both the exterior and interior of the vehicle from input images of a damaged vehicle, as well as the associated repair costs, as described herein.
- In order to achieve better accuracy in training the machine learning models, some embodiments use assisted annotation to annotate each image in the dataset with an annotation tool to perform supervised/semi-supervised machine learning. An example of an interface of the annotation tool is shown in
FIG. 29 . The annotation tool may be software executed by the server computing device. Afirst portion 2902 of the interface includes an interface element for selecting the pose of the vehicle in the image from one of eight (8) pre-defined poses.FIG. 30 illustrates an example of the eight pre-defined poses of a vehicle. Specifically, the eight poses can be (1) left, (2) left rear, (3) rear, (4) right rear, (5) right, (6) right front, (7) front, and (8) left front, as shown inFIG. 30 .FIG. 31 illustrates which external body parts are visible in each of the eight pre-defined poses, according to one implementation. Shading in the table inFIG. 31 illustrates that a given part is visible in a given pose, relative to a camera capturing the image of the vehicle. - A
second portion 2904 of the interface includes interface elements for selecting a damage severity to each of the 24 external body parts of the vehicle in the image. In one implementation, the choices for each body part in thesecond portion 2904 of the interface may include (N) no damage shown, (W) weak damage shown, (S) strong damage shown, or (C) changed parts. Note, some parts that are not visible in the image may be damaged, but N is selected for those parts since the damage is not shown in the image. - In one implementation of the CNN model, each image in the training dataset is tagged with metadata identifying the pose of the vehicle in the image and indications of which parts are visible as damaged in the image and/or the severity of the damage to those parts. An interface tool like the one shown in
FIG. 29 can be used to manually tag each image, in one embodiment. In other embodiments, the tagging can be performed automatically by an image processing system. In such embodiments, the interface tool could be used to verify or edit the automatically created tags. - In some implementations, a non-regularized neural network may learn features and noise equally well, increasing the potential for overfitting. Overfitting is a modeling error that occurs when a function is too closely fit to a limited set of data points. Thus, some embodiments may apply L2 regularization to avoid overfitting. Some embodiments can also leverage data augmentation to limit overfitting, as described herein.
- Data augmentation, as described herein, can include offline augmentation or online augmentation to modify the model training data based on existing images. In one implementation, offline augmentation corresponds to increasing the volume of model training data before training a model. In one implementation, online augmentation corresponds to modifying the model training data when training so that the model is trained on the slightly different and randomly modified original data. Online augmentation does not increase the volume of data used for training, but rather reduces the chance of over-fitting the model on the original data. In various embodiments, online and offline augmentation methods can be performed independently.
- Examples of offline augmentation include, but are not limited to: (a) flipping images horizontally, (b) cropping images with a predefined cropping window, or (c) brightness jittering, which adds an offset value to one or more color channels of an image. For example, if an existing image shows damage to the left side of the vehicle, by performing horizontal flipping, a new image is created that shows damage to the right side of the vehicle. By performing data augmentation on existing images, more images are added to the training dataset.
- Data augmentation can also include online augmentation to add even more images to the training data set. Examples of offline augmentation include, but are not limited to: (a) deforming an image in affine spaces, (b) adding noise that is sampled from a Gaussian distribution, (c) adding noise that is valued with a Gaussian term ƒ′(x,y)=ƒ(x,y)+N(0,σ2) a (i.e., so-called “salt and pepper noise”), (d) cropping away an original image with a pre-define cropping window, or (e) scaling each image by a scaling factor.
- In some implementations, performing data augmentation to add additional images to the training data set can improve the performance of the model by reducing the chances of overfitting. Once the model(s) have been trained, the method of
FIG. 27 can be performed. - Referring back to
FIG. 27, at step 2702, a server, such as server 102, receives one or more images of a damaged vehicle from a client device. The images can be taken by, for example, a car insurance customer, an insurance agent, a police officer, or anyone else.
step 2704, the server performs pre-processing on the images. Pre-processing may include one or more of the following portions: (A) vehicle exterior classification, (B) car pose estimation, and (C) car instance segmentation. - In some implementations, the images received can be taken from many possible angles and distances with respect to the car of interest. For example, an image might be a close-up of the damaged area on an external body part, or an image showing the VIN number of the vehicle. In order to efficiently and effectively detect damage to the exterior of a vehicle, images showing the car exterior are used in the method of
FIG. 27 . Thus, in some embodiments, pre-processing is performed to filter out non-exterior images. For example, an image that shows the interior of a vehicle can be filtered out. In some implementations, the system can define requirements for input images to be of the exterior of the vehicle. One embodiment may use a convolutional neural network to determine whether an image is showing exterior of the vehicle or not. The convolutional neural network takes images as input and gives a binary decision on the exterior/non-exterior determination. - Some embodiments determine which external body parts are visible in the input image. To do so, the server first classifies the pose of the car (with respect to the camera) into one of the eight pre-defined poses, as shown in
FIG. 30 . To simplify the pose estimation problem, some embodiments assume the camera is looking at the geometric center of the vehicle. Some embodiments use a convolutional neural network to achieve better pose classification. Once the pose is determined, a table such as the one shown inFIG. 31 can be used to identify which external parts are visible in the image. - Instance segmentation includes identifying the outline of the object of interest (e.g., the car) and removing all other parts of the image (e.g., the background and other cars that are not of interest). Performing instance segmentation during the model training phase provides a better training set for the classification network at run-time, as well as generates the input for 2D-to-3D alignment algorithm (described in more detail below). Accurate instance-level segmentation is a challenging problem, as many leading segmentation methods are unaware of individual object instances. Embodiments of the disclosure provides a novel instance-aware segmentation approach using Multi-task Network Cascades (MNC) enhanced using Structured Random Forest (SRF) with edge-map based potentials.
- Image segmentation is a well-studied problem and recently has achieved significant improvement thanks to the use of deep learning networks. However, embodiments of the present disclosure are interested in instance-aware segmentation due to the fact that multiple vehicles may be displayed in images captured by users. Furthermore, the segmentation results are also expected to be aware of the object boundaries, which are used for precise 2D-to-3D alignment (described later in step 2710).
- Embodiments of the disclosure provide a two-step framework to perform instance-aware, boundary-aware image segmentation. Embodiments of the disclosure first extract the segmentation from the input image using Multi-task Network Cascades (MNC). Then, an edge-map detected from a Structured Random Forest (SRF) is incorporated to enhance the boundaries of each segmented instance of the image, which is implemented as a conditional random field (CRF) where the unary terms are computed from the results of MNC and the pairwise terms are computed from the edge map detection algorithm. Described below are the MNC approach and use of CRF with edge-map based potentials.
- MNC is a CNN-based method for instance-aware image segmentation. Prior CNN methods typically employ a mask proposal method in order to differentiate instances. Such an approach may be slower at the inference step, but also takes no advantage of deep learning features for the mask proposal, which poses a potential bottleneck for segmentation accuracy and scalability.
- MNC is a type of fast R-CNN (Region-based Convolutional Neural Network), which incorporates a pre-processing step directly into the CNN structure. In essence, MNC is a cascaded network where each state is designed specifically for a certain task with a specific task-oriented cost function. In one implementation, three main sub-tasks decomposed from instance-aware segmentation include: (1) Class-agnostic bounding boxes detection, (2) Class-agnostic mask estimation from the bounding boxes, and (3) Mask categorizing. The three main tasks share the same deep feature bank and each takes the input from the immediately previous task. An example overview of an MNC (Multi-task Network Cascades) model is shown in
FIG. 32 . - In one implementation, three cost functions are defined for three stages, including: regressing box-level instances, regression mask-level instances, and categorizing instances, respectively. A regressing box-level instances layer takes regression results of a box-level layer and feature maps to predict a class-agnostic mask for each predicted bounding box. It can be performed as binary logistic regression to the ground truth mask. Instead of using a sliding-window as the method of box-level regression, in some embodiments this stage operates on the predicted bounding boxes. As the predicted bounding boxes can be different in size, a feature pooling method, namely, region-of-interest (ROI) can be used to obtain a fixed-size feature from an arbitrary box. To categorize the instances predicted from the first and second stage, the deep feature for each bounding box is extracted by ROI pooling and then “masked” before being fed into a softmax layer. Additional detail can be found in DAI, et al. “Instance-Aware Semantic Segmentation via Multi-task Network Cascades,” 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3150-3158 (June 2016), which is incorporated by reference herein.
-
FIG. 33 illustrates example results of performing MNC to segment images of vehicles. The images in top row are original images, the second row indicates different instances segmented by MNC. As shown inFIG. 33 , MNC performs well at differentiating the instances of different vehicles (shown in shading in the second row ofFIG. 33 ). However, the segmentation does not well preserve the precise boundary of each instance. This phenomenon is expected as the masks of MNC are obtained on top of low-resolution deep feature maps. - In order to enhance the segmentation result of MNC by better preserving the boundary of the object of interest (i.e., the vehicle), some embodiments employ an edge-map detected from a Structured Random Forest (SRF). The goal is to incorporate two sources of complementary information, i.e., the object-level segmentation obtained from deep features and the edge-map generated from the patch-based low-level features.
- Structured edge is a trainable edge detection model based on a random forest—i.e., an ensemble model containing multiple decision trees. A decision tree ƒ(x) predicts the label of a sample x by recursively branching left or right down the tree until a leaf node is reached. Each branching can be considered as a sub-classifier that assigns the sample into a smaller group, either left or right. Each leaf node is associated with a label, which will be assigned to the sample reaching the leaf node.
- In one embodiment, training the tree involves finding a good criterion for splitting the data at each branch of the tree based on the training data. The criterion is defined as an "information gain" criterion that encourages the commonality of training samples within each newly formed group. A splitting criterion such as Gini impurity or entropy can be used, in some implementations.
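- As an illustration of such a criterion, the snippet below computes the Gini impurity of a label set and the impurity decrease (information gain) of a candidate split; the toy labels and the lack of edge-case handling are simplifications for exposition.

    import numpy as np

    def gini(labels):
        """Gini impurity of a set of (clustered) class labels."""
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return 1.0 - np.sum(p ** 2)

    def split_gain(labels, go_left):
        """Impurity decrease achieved by sending samples left/right at a branch."""
        left, right = labels[go_left], labels[~go_left]
        n = len(labels)
        children = (len(left) / n) * gini(left) + (len(right) / n) * gini(right)
        return gini(labels) - children

    labels = np.array([0, 0, 0, 1, 1, 1])
    print(split_gain(labels, np.array([True, True, True, False, False, False])))  # ~0.5, good split
    print(split_gain(labels, np.array([True, False, True, False, True, False])))  # ~0.06, poor split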
- For the structured edge detection algorithm, a sample x is the feature vector of a 32×32 image patch (for example) and the label is a 16×16 segmentation mask (for example), as illustrated in
FIG. 34 . As shown in FIG. 34 , given a set of structured labels such as segments, a splitting function (a) is determined. A good split (b) groups similar segments, whereas a bad split (c) does not. In practice, the algorithm clusters the structured labels into two classes (d). Given the class labels, a standard splitting criterion, such as Gini impurity, may be used (e). Additional details can be found in DOLLAR, et al. "Fast Edge Detection Using Structured Forests," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 37, Iss. 8, pp. 1558-1570 (Dec. 4, 2014), which is incorporated by reference herein. - In one implementation of the algorithm for vehicles, the feature vector is computed based on low-level features including intensity, gradient, and orientation. A sampling mechanism is used so that each tree uses m dimensions of the feature vector, which improves efficiency and enhances randomness. During testing, the results of all the trees are combined to compute a soft edge map, in which each pixel value indicates the probability of that pixel being an edge.
- CRF with MNC and Edge-Map Based Potentials
- Embodiments of the disclosure first generate an over-segmentation of the image into superpixels and then establish a conditional random field on top of the superpixel graph. Such a CRF models the probability of each superpixel being labeled as object or non-object based on the MNC segmentation and edge-map values. Specifically, CRF inference minimizes the energy function defined by:
- E(x) = wu·Eu(x) + wp·Ep(x), where Eu(x) = Σi U(xi) and Ep(x) = Σ(i,j) P(xi, xj)
- Eu is the unary energy, summing over all superpixels. Embodiments of the disclosure calculate this term using the segmentation result from MNC: U(x) is defined by the percentage of the area of superpixel x that is segmented as object.
- Ep is the pairwise energy, summing over all pairs of neighboring superpixels. Embodiments of the disclosure calculate this term from the edge-map: P(x, y) is computed using the output of the structured edge detector, i.e., the local segmentation masks. The two terms are weighted by wu and wp, which are controlling parameters balancing the unary and pairwise terms. In one example, the two controlling weights wu and wp are set to two (2) and one (1), respectively.
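- For illustration, the snippet below evaluates such a weighted energy for one candidate labeling; the cost conventions (a per-superpixel unary cost derived from the MNC overlap, a per-edge penalty derived from the edge map) and the NumPy formulation are assumptions, and an actual embodiment would hand this energy to a CRF inference routine rather than evaluate it in isolation.

    import numpy as np

    def crf_energy(labels, unary_cost, neighbors, edge_penalty, w_u=2.0, w_p=1.0):
        """Energy of a binary superpixel labeling.

        labels:       0/1 array, one entry per superpixel (1 = object)
        unary_cost:   per-superpixel cost of labeling it object, e.g. derived
                      from 1 - (MNC overlap fraction) and the safe/ignored masks
        neighbors:    list of (i, j) index pairs of adjacent superpixels
        edge_penalty: per-pair cost derived from the structured edge map
        """
        e_unary = np.sum(unary_cost * labels)          # MNC-driven term
        e_pair = sum(edge_penalty[k]                   # penalize label changes
                     for k, (i, j) in enumerate(neighbors)
                     if labels[i] != labels[j])        # across weak-edge boundaries
        return w_u * e_unary + w_p * e_pair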
- In order to make the framework more robust to the object boundary, some embodiments can add a spatial constraint to the unary term. For example, a safe mask and an ignored mask are generated from the initial segmentation of MNC, visualized in
FIG. 35 . - In
FIG. 35 , the left image is the original image, the middle image is the result of MNC segmentation, and the right image shows an overlaid safe mask and ignored mask. - Superpixels located in the safe mask are assigned negative energies so that they are guaranteed to be labeled as part of the object. Likewise, superpixels located in the ignored mask are assigned high energies so that their labels are likely non-object. Using the safe mask and the ignored mask leaves only a narrow band along the segmentation boundary to be decided by the CRF.
- Referring back to
FIG. 27 , after the server performs pre-processing at step 2704, the server performs damage detection at step 2706. Damage detection is performed for each part that is visible in a given image. The outcome of step 2706 is an indication, for each part that is visible in the image, whether that part is damaged or not. - Embodiments of the disclosure make use of transfer learning to make a binary decision on whether an external part is damaged or not in an image. In one implementation,
VGG 19 can be used as a baseline model, modified with a new classifier.VGG 19 is a known pre-trained CNN baseline model. Other baseline models can also be used, such as resnet-50, resnet-101, inception v-3, and inception v-4. Embodiments of the disclosure observed similar accuracy of these different models on the same test set. Below we will discuss the details in terms of VGG 19 as the baseline model, as an example. -
FIG. 36 is an example of a modified VGG 19 model used to detect damage of vehicle parts, according to one embodiment. As shown, the last layer of the network is replaced with a binary classification layer—either two neurons with softmax activation, or one neuron with sigmoid activation. - In one embodiment, with a one-neuron sigmoid activation layer, embodiments of the disclosure are able to adjust the threshold used to binarize the model output into a categorical decision (i.e., damaged or not damaged). This operation can be used to choose an operating point in a real application. Different thresholds result in different performance in terms of TPR (true positive rate—percentage of damaged parts correctly labeled as damaged) and FAR (false acceptance rate—percentage of non-damaged parts labeled as damaged).
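- One way the head replacement and the threshold sweep could look is sketched below; the torchvision VGG 19 definition is a standard published baseline, while the random input batch, the placeholder labels, and the particular thresholds are stand-ins for real part crops and annotations.

    import torch
    import torch.nn as nn
    from torchvision import models

    # VGG 19 with its 1000-way output swapped for a single sigmoid unit.
    model = models.vgg19(weights=None)  # or a pre-trained weight enum for transfer learning
    model.classifier[-1] = nn.Linear(model.classifier[-1].in_features, 1)

    def tpr_far(scores, is_damaged, threshold):
        """Operating point for one threshold on the sigmoid output."""
        pred = scores >= threshold
        tp = (pred & is_damaged).sum().item()
        fp = (pred & ~is_damaged).sum().item()
        tpr = tp / max(is_damaged.sum().item(), 1)
        far = fp / max((~is_damaged).sum().item(), 1)
        return tpr, far

    with torch.no_grad():
        images = torch.randn(8, 3, 224, 224)          # stand-in for part crops
        scores = torch.sigmoid(model(images)).squeeze(1)
        labels = torch.zeros(8, dtype=torch.bool)     # stand-in ground truth
        for t in (0.3, 0.5, 0.7):                     # sweep operating points
            print(t, tpr_far(scores, labels, t))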
- The result of performing
step 2706 is a binary decision as to whether each of the 24 parts is damaged in a 2D image. At step 2708, the server performs damage localization to determine where the damage is localized on the 2D image of the vehicle. The goal of this step is to determine which portions of a damaged part are damaged (e.g., a percentage of the part, in terms of area). In some implementations, if the percentage of the part that is damaged is below a threshold, the part may be considered repairable. If the percentage of the part that is damaged is above the threshold, the part should be replaced. The threshold can be different for different parts and can be configurable. - In
step 2708, the localized damage on the 2D images can be shown, for example, by drawing a 2D heatmap overlaying the image to localize where on the image the damage is located. One embodiment for performing 2D damage localization is the Grad-CAM algorithm, described in SELVARAJU, et al. “Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization,” 2017 IEEE International Conference on Computer Vision (ICCV) (Oct. 22-29, 2017), which is incorporated by reference herein. A second embodiment for performing 2D damage localization is a modified Grad-CAM algorithm, also described below. - The general Grad-CAM algorithm workflow is illustrated in
FIG. 37 . Grad-CAM uses a model that has been trained to classify images into pre-defined classes. Using the trained model, the algorithm infers the areas (or pixels) that are most responsible for the classification decision, so it can also be thought of as "model visualization." In the example shown in FIG. 37 , the steps to highlight the tiger cat in the input image include:
- Step 1. Add a new layer on top of the last layer of the trained classification network: ƒ(v_softmax) = v_onehot^cat · v_softmax. The result is a vector that is zero everywhere except at the cat index, where the value is the confidence score from the model.
- Step 2. Add the cost. A cost function is defined so that the confidence score can propagate back through the model (in the form of a gradient): L(x) = sum(x).
- Step 3. Forward the image through this model.
- Step 4. Backpropagate the cost to the output of the layer of interest (e.g., the output of the last convolution layer). Let the output be A_ij^k (the (i, j) element of the k-th feature map) and the corresponding gradient be ∂L/∂A_ij^k.
- Step 5. Calculate the weight for each feature map by global average pooling of its gradients: w_k = (1/Z)·Σ_i Σ_j ∂L/∂A_ij^k, where Z is the number of spatial positions.
- Step 6. Calculate the CAM as a rectified weighted sum of the feature maps: CAM = ReLU(Σ_k w_k·A^k).
- However, in some implementations for localizing damage on vehicles, gradient vanishing causes a problem in which w_k is so small that it almost cancels out the entire feature map A. Therefore, one embodiment of the disclosure provides a modified Grad-CAM algorithm, which adds a rescale layer after step 5. Specifically, from step 5 onward, the workflow for the modified Grad-CAM algorithm is:
- 5. Calculate the weight by global average pooling: w_k = (1/Z)·Σ_i Σ_j ∂L/∂A_ij^k.
- 6. Rescale it to prevent gradient vanishing: w_k* = w_k/max(w).
- 7. Calculate the CAM: CAM = ReLU(Σ_k w_k*·A^k).
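- As an illustration of the modified weighting, the following sketch computes a CAM from one layer's activations and gradients; the NumPy formulation and the array shapes are assumptions made for exposition rather than details taken verbatim from the disclosure.

    import numpy as np

    def modified_grad_cam(feature_maps, gradients, rescale=True):
        """CAM from one conv layer's activations A^k and gradients dL/dA^k.

        feature_maps: array (K, H, W); gradients: array (K, H, W).
        """
        weights = gradients.mean(axis=(1, 2))        # step 5: global average pooling
        if rescale:                                  # step 6: w_k* = w_k / max(w)
            weights = weights / weights.max()
        cam = np.tensordot(weights, feature_maps, axes=1)  # step 7: sum_k w_k* A^k
        return np.maximum(cam, 0)                    # keep positively contributing regions

- The results of the original and modified Grad-CAM algorithms are shown in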
FIG. 38 . For each part (e.g., left front fender and trunk), the input image is shown in the left column, the results of Grad-CAM are shown in the middle column, and the results of modified Grad-CAM are shown in the right column. - Referring back to
FIG. 27 , at step 2710, the server performs 2D-to-3D alignment to align a 2D image (with localized damage detected at step 2708) to a 3D model. Two different embodiments for aligning a 2D image to a 3D model are described: Approach 1: Template Matching and EM Refinement, and Approach 2: Point Correspondence Method. - One embodiment for determining the 2D-to-3D association between a car in the image and a 3D model of the car is to first discretize, or sample, the 3D model and create a look-up dictionary (i.e., a collection of 2D car model images). Then, look up in the pre-built collection to find the best match to the car in the 2D image. Therefore, we call this approach "template matching."
- Template Matching Pose Alignment with CNN Features
- The template matching approach includes two successive stages: initial template matching and refining matching with contour, as shown in
FIG. 39 . A pre-trained deep neural network is used to extract features from both a 3D model and the image to be aligned. Then, the cosine similarity is measured between rendered and real images. The algorithm pipeline is shown inFIG. 40 . - In the rendering/sampling step, a discrete set of renderings of a 3D car model is generated from various viewing points by adjusting four parameters: the distance to camera center, elevation angle, azimuth angle, and yaw angle. These parameters are shown in
FIG. 41 for an example 3D car model. Then, each rendered car image is rescaled to the same dimension (for example, 256×256 pixels). The following parameter ranges can be selected, as an example implementation: - distance=[400,450,500,550,600,650,700,750];
- elevation=[0-20 degree];
- azimuth=[0-360 degree]; and
- yaw=[−6-6 degree].
- In the next step, the disclosed embodiments use a pre-trained CNN to extract features. The resulting feature is a one-dimensional representation of the 2D image. Therefore, the cosine similarity between the 2D car image and each one of the rendered templates can be calculated using the equation:
- similarity(f_img, f_tpl) = (f_img · f_tpl)/(‖f_img‖·‖f_tpl‖), where f_img and f_tpl are the CNN feature vectors of the real image and of a rendered template, respectively.
- Some embodiments of the disclosure choose the template with the highest matching score as the final alignment, and its corresponding pose as the pose estimation result.
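- A compact sketch of this matching step is shown below; the feature extraction itself is assumed to have been done elsewhere (e.g., by the pre-trained CNN mentioned above), and the array shapes are illustrative.

    import numpy as np

    def best_template(image_feat, template_feats):
        """Pick the rendered template whose CNN feature is most similar to the image.

        image_feat:     1-D feature vector extracted from the real 2D car image
        template_feats: array (N, D), one feature vector per rendered pose
        """
        a = image_feat / np.linalg.norm(image_feat)
        b = template_feats / np.linalg.norm(template_feats, axis=1, keepdims=True)
        scores = b @ a                      # cosine similarity against every template
        best = int(np.argmax(scores))       # highest score -> estimated pose
        return best, float(scores[best])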
- In some embodiments, an EM-based pose refinement approach can be used to fine-tune the alignment result from the above template matching stage. The refinement is achieved by aligning a silhouette (e.g., boundary of 2D projection) of the 3D model with the corresponding image boundary of the target vehicle.
- The goal of pose refinement is to find the camera matrix that correctly projects the 3D model onto the real vehicle image. Given that camera matrix, the silhouette (i.e., boundary of 2D projection on image plane) of the 3D model should be seamlessly overlapped with the boundary of the real vehicle. This is illustrated in
FIG. 42 . - If the correct point-wise correspondence between the silhouette of the 3D model and the real vehicle boundary is known, then the underlying camera matrix can be solved via ordinary least-square optimization.
- However, that kind of correspondence can be difficult to determine. To address this problem, some embodiments use an iterative framework, as shown in
FIG. 43 , which alternates between two steps to approximate the correct solution. The first step registers the boundaries extracted from a real vehicle image and a rendered image, establishing a point-wise correspondence between two boundaries. The second step updates the camera matrix based on the current registration (i.e., correspondence). - The registration step takes two images as inputs, the vehicle image Iv and the rendered image Ir. Ir is rendered from the most up-to-
date 3D pose estimation for Iv (either from the initialization stage, or from last iteration); thus, Iv and Ir should be of similar pose. The vehicle boundary in Iv is extracted from the image segmentation results described in previous sections, while the boundary of the rendered vehicle in Ir is extracted from Ir's alpha channel. Embodiments of the disclosure designate {mi} and {vj} as the 2D boundary points extracted from images Ir and Iv, respectively. - To perform the registration, the disclosed embodiments aim to find a homography Hβ that maps each boundary point mi to a corresponding vj. For each mi, we denote {circumflex over (m)}i as its counterpart after transformation with Hβ:
-
{circumflex over (m)} i =H β m i - Embodiments of the disclosure use di(Hβ) to denote the distance from {circumflex over (m)}i to its nearest vj:
-
- Putting the above two equations together we have:
-
- where D(x, y) denotes the squared Euclidean distance.
- It is noted that some boundary points in Ir, referred to as “outliers,” do not have corresponding boundary points in Iv. The outliers have two main sources, one is that an innate discrepancy exists in the current pose estimation, and as a result some boundary points on Ir are actually invisible in Iv. The other source is the shape distortion due to damage to the vehicle. Therefore, one embodiment may account for the outliers during the registration process. Each distance di is assigned with a posterior probability pi, indicating how likely each point mi is an inlier. Formally, the problem of boundary registration is formulated as finding the correct homography Hβ that minimize the objective function:
-
- where Ni is the number of points on the rendered vehicle's boundary. Embodiments of the disclosure apply the EM algorithm to estimate homography Hβ. The EM algorithm iterates between updating the posterior probability pi based on current registration (“E-step”), and re-estimating the homography parameters (“M-step”) until it converges to a local minimum.
- This boundary registration method generates a homography Hβ that maps each boundary point mi to its corresponding vj, as well as a probability map indicating how likely each mi is an inlier.
- In the camera matrix step of pose refinement, embodiments of the disclosure calculate the camera matrix from the registered image boundaries. Each boundary point mi is associated with a 3D point Mi on the 3D model, as {mi} is the 2D projection of {Mi}. From last step, we know each mi has a corresponding vj on the boundary of the real vehicle. Using {mi} as a bridge, embodiments of the disclosure establish the 3D-to-2D correspondence between 3D points {Mi} and the 2D points {vj}. The above correspondence can be utilized to estimate the camera matrix P as mentioned previously.
- With the probability map for inliers generated from last step, embodiments of the disclosure filter out all the outliers from {mi}. Only those reliable 3D-to-2D correspondences that associated with inliers are kept, denoted as {vk↔Mk, k=1, 2, . . . , Nk, s.t. Mk∈inliers}, NK is the number of inliers. The relationship between each pair vk and Mk can be defined as below with regard to the camera matrix P:
-
sv k =PM k. - With the above equation, given NK correspondences, embodiments of the disclosure can find a linear solution of camera matrix P using direct linear transformation (DLT).
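- For reference, the standard direct linear transformation solution for this relationship can be sketched as follows; this is the textbook DLT construction (two equations per correspondence, solved via SVD), offered as an illustration rather than the exact routine of the disclosure.

    import numpy as np

    def dlt_camera_matrix(points_3d, points_2d):
        """Linear DLT estimate of the 3x4 camera matrix P from s*v_k = P*M_k.

        points_3d: (N, 3) inlier model points M_k;  points_2d: (N, 2) image points v_k.
        Needs N >= 6 correspondences; the result minimizes an algebraic error only.
        """
        rows = []
        for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
            M = [X, Y, Z, 1.0]
            rows.append([0, 0, 0, 0] + [-m for m in M] + [v * m for m in M])
            rows.append(M + [0, 0, 0, 0] + [-u * m for m in M])
        _, _, vt = np.linalg.svd(np.asarray(rows))
        return vt[-1].reshape(3, 4)          # null-space vector, reshaped to P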
- The linear solution of P obtained by DLT algorithm minimizes an algebraic error that is not geometrically meaningful. Therefore, embodiments of the disclosure can further approximate the optimal solution by minimizing the geometric error. Geometric error is also known as re-projection error, which is defined as the average distance between the re-projected points and the image points. In one implementation, embodiments of the disclosure are solving for a camera matrix P that minimizes the following nonlinear error function:
-
- The nonlinear optimization of this equation can be solved with Levenberg-Marquardt (LM) method. It can be noted that the LM method uses an initialization point to start with. In one implementation, we simply take the estimated camera matrix P from the linear solution for this purpose. The complete algorithm for 3D-to-2D alignment is summarized in
Algorithm 1 below: -
Algorithm 1: Proposed algorithm for aligning a 2D image with a 3D CAD model
Input: Vehicle image with background removed Iv, 3D CAD model M, and initial camera matrix P0
Output: Optimal camera matrix P for 3D-to-2D alignment
Image rendering:
1 Create a rendered image Ir using the initial camera matrix P0
Boundary detection:
2 Extract boundary points mi and vj from Ir and Iv, respectively
3 Extract the 3D coordinates Mi for 2D boundary points mi
Initialization:
4 t ← 0; P′ ← P0
5 repeat
    Chamfer alignment:
6   n ← 0
7   repeat /* EM iterations */
8     Compute posterior probabilities {pi}, i = 1 . . . Ni, using (6.2)
9     Estimate homography parameter Hβ using the LM method
10    Compute γ using (6.3), λ using (6.4)
11    n ← n + 1
12  until (n > max iterations)
    Camera matrix update:
13  Select inlier points {mk} ← {mi | pi > 0.8}
14  Identify correspondences vj ←→ mk ←→ Mk
15  Find the linear solution of P based on (3.3)
16  Find P that minimizes (3.1)
17 until (||P′ − P′−1||2 ≤ ε1) or (t > max iterations) // convergence of parameters
18 Output the current parameter estimate P′ as P
(Some entries of the original table are marked as missing or illegible when filed.)
- The results of adding the pose alignment procedure are shown in
FIG. 44 . The left column shows the results of template matching alignment, and the right column shows improvements in the alignment after adding pose refinement. - A second embodiment for aligning a 2D image to a 3D model is referred to as point correspondence matching. The first embodiment uses a two-stage method: a template matching stage based on rough pose estimation, followed by a contour matching stage based on fine-grained pose alignment. The first embodiment works well under most conditions. However, in cases where the contour of the car is severely deformed due to damage, a complementary approach has been developed to account for this special situation, referred to herein as point correspondence matching.
- In this embodiment, a set of 3D anchor points {Li} are pre-selected on the surface of the 3D model as shown in
FIG. 45 . Given a car image of the same make and model as the 3D model, there should exist another set of 2D anchor points {li}, which are 2D projections of {Li} on the image plane. - Given the pairwise 3D-to-2D correspondence between each Li and li, the underlying camera matrix, which determines the 3D pose that maps the 3D model onto the 2D image plane, can be solved as a Perspective-n-Point problem, as shown in
FIG. 46 . Embodiments of the disclosure call this procedure point based matching. - In this approach, embodiments of the disclosure use anchor point detection to solve for the 3D-to-2D correspondence, and the 3D pose is estimated through point based matching.
- In order to find the above mentioned 3D-to-2D correspondence, one embodiment of the disclosure treats each 2D projection li as an individual object to detect within the target image. In other words, given each 3D anchor point Li and a target image, embodiments of the disclosure infer the location of its 2D projection li on the image plane. In one example implementation, there may be 86 anchor points on a vehicle, so there are 86 locations to detect.
- For anchor point detection, a set of heatmaps can be used to represent the correct anchor point locations on the image plane. Each heatmap is of the same size as the target image, and the intensity value in each pixel is the confidence score denoting how confident the 2D projection li is centered at that location. The entire heatmap is a 2D Gaussian function with its peak centered at the correct location of li. In the implementation of 86 anchor points, there are 86 heatmaps for the anchor points, as shown for example in
FIG. 47 . By searching for the maximum peak of the heatmap, embodiments of the disclosure can find anchor point's location. This process is called heatmap-based localization. - By introducing the heatmap based localization, the anchor point detection task is converted into predicting a per-pixel confidence map for each individual anchor point. The prediction procedure is formulated as a per-pixel least square regression, and the model is tuned to generate heatmaps that minimize the total least square error with the ground truth.
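- The following sketch illustrates the heatmap representation and the peak search described above; the image size, the Gaussian width sigma, and the example anchor location are arbitrary illustrative values.

    import numpy as np

    def gaussian_heatmap(h, w, center, sigma=4.0):
        """Ground-truth confidence map: a 2-D Gaussian peaked at the anchor point."""
        ys, xs = np.mgrid[0:h, 0:w]
        cx, cy = center
        return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))

    def locate_anchor(heatmap):
        """Heatmap-based localization: the predicted location is the maximum peak."""
        y, x = np.unravel_index(np.argmax(heatmap), heatmap.shape)
        return int(x), int(y), float(heatmap[y, x])   # (x, y, confidence)

    gt = gaussian_heatmap(256, 256, center=(120, 80))
    print(locate_anchor(gt))    # -> (120, 80, 1.0)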
- To leverage the power of deep learning on this task, some embodiments use an encoder-decoder shaped neural network for heatmap prediction. The encoder part of this network is composed of a series of convolutional layers and intermediate max pooling layers. The output is a down-sampled feature map extracted from the input image. Following the down-sampling encoder network, there is an up-sampling decoder network. A series of transposed convolutional layers, also called deconvolutional layers, are applied to up-sample the feature maps. Embodiments of the disclosure call the layer that connects the encoder and decoder networks the bottleneck layer, because it has the smallest input and output size. Several skip connection layers are bridged between the encoder network and the decoder network, to merge the spatially rich information from low-level features in the encoder network with the high-level object knowledge in the decoder network.
- In some implementations, not all of the anchor points are visible in the target image. Including predictions for those invisible points in the loss function may degrade the training process, as the model is driven to approximate targets that do not exist.
- To address this situation, a second learning task can be added to the existing model in some embodiments. The model learns to predict the visibility status of each anchor points, by formulating it as multi-label binary classification. From the bottleneck layer, the model branches out a series of cascaded fully connected layers to predict the visibility status. The encoder network, together with the branched fully connected layers, constitute a regular convolutional network for binary classification.
- To train this multi-task deep learning model, the loss function can be the summation over all individual heatmap regression tasks, and the visibility prediction task.
- The regression task can use the least-square error between the predicted heatmaps and the ground-truth heatmaps as the loss function. During training, the ground-truth visibility status can be used to mask out the losses coming from the invisible anchor points, so they are not present in the final loss function. During inference, the predicted visibility can be used to determine which heatmaps to output.
- To formulate the loss function of our model, one embodiment denotes {ĥc} and {{circumflex over (v)}c} as the ground truth heatmap and visibility status for anchor point c, where {circumflex over (v)}c∈{0,1}. Embodiments of the disclosure use hc and vc to denote the predictions of heatmap and visibility status of our network.
- The heatmap regression uses the least-square error between the predicted heatmaps {hc} and the ground-truth heatmaps {ĥc} as the loss function. In some embodiments, only the least-square error from the visible anchor points should be included in the loss. Embodiments of the disclosure introduce a weighted least-square error using {v̂c} as weights, to mask out invisible anchor points where v̂c=0. The heatmap loss Lh is formulated as:
- Lh = Σ_c v̂c·‖hc − ĥc‖²
- For the visibility prediction, some embodiments use the per-anchor-point cross-entropy as the loss function. The loss Lv is formulated as:
- Lv = −Σ_c [v̂c·log(vc) + (1 − v̂c)·log(1 − vc)]
- Finally, the total loss Ltotal is formulated as:
- Ltotal = Lh + λ·Lv
- where λ is a weight term that helps balance the two losses.
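- A minimal PyTorch-style sketch of this combined loss is shown below; the per-image shapes and the example weight lam are assumptions for illustration (no particular value of λ is taken from the disclosure).

    import torch

    def multitask_loss(pred_heat, pred_vis, gt_heat, gt_vis, lam=0.1):
        """L_total = L_h + lambda * L_v for one image.

        pred_heat/gt_heat: (C, H, W) heatmaps;  pred_vis: (C,) visibility logits;
        gt_vis: (C,) 0/1 visibility -- invisible anchors are masked out of L_h.
        """
        per_anchor = ((pred_heat - gt_heat) ** 2).mean(dim=(1, 2))   # least-square error
        l_h = (gt_vis * per_anchor).sum()                            # weighted by v_hat
        l_v = torch.nn.functional.binary_cross_entropy_with_logits(pred_vis, gt_vis)
        return l_h + lam * l_v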
- To train our multi-task deep model, a large amount of car images with annotated anchor point locations and visibility can be used to achieve good generalization in a real application. However, accurate annotations of anchor points are hard to achieve due to the high cost of manual annotation, and the associated inaccuracies due to human error. To obtain such large-scale training data, some embodiments utilize a 3D model combined with a 3D render engine to generate a synthetic training dataset to train our model. This is similar to the data augmentation described with respect to 2D images.
- For example, the rendered images from a 3D model tend to be homogeneous in lighting condition, color, and texture, thus may cause over-fitting. To compensate for that, some embodiments randomly add light sources of different locations and intensity, and adjust part colors of the 3D model to introduce more variations in our synthetic training dataset.
FIG. 48 illustrates rendering synthetic images and determining anchor points for the synthetic images, according to one embodiment. - To further reduce over-fitting, some embodiments add several additional data augmentation strategies during training. For example, some embodiments may randomly crop the original input images, and take the cropped sub-images as training data. Other embodiments as salt and pepper noise to each color channel. Also, random amounts of rectangular masks can be applied to the image to mask out some parts from the car.
- As described, given a set of 3D anchor points {Xi} defined on the surface of 3D model, some embodiments can find their corresponding 2D projections {xi} on the image plane. We now focus on how to solve for the camera matrix that correctly map {Xi} to {xi}.
- As shown in
FIG. 49 , the relationship between each Xi and corresponding xi is defined by the camera matrix, which includes the projection Matrix P, rotation matrix R, and translation matrix T, formulated by the equation: -
x=P[R|T]X - To match each pair of Xi and xi from {Xi} and {xi}, the bellowing loss should be minimized:
-
- For P, R, T, in total there are six (6) parameters to optimize. Some embodiments suppose that focal length is known, thus there are three (3) parameters for rotation and three (3) parameters for translation. A 6-dimension vector p is used to denote the parameters to tune, and the optimization problem is formulated as:
-
- Since the transformation function ƒ(xi;p) is nonlinear, the above formula cannot be solved directly; instead, an iterative approach is applied. This approach iteratively finds an update Δp to the current parameter p by minimizing:
-
- By solving the above formula:
-
Δp=−(J −1)T r - where: r=[r0 . . . ri . . . ]
-
J −1 =[J(X 0 ;p) . . . J(X i ;p) . . . ] - By iteratively updating p using p=p−α(J−1)Tr, embodiments of the disclosure can compute an optimized p.
- To calculate J, embodiments of the disclosure can perform this analytically or numerically. One embodiment performs this numerically, since it may be easier to do this in a program.
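- As an illustration of the numerical route, the sketch below approximates J by finite differences and applies an iterative least-squares update; the generic residual callable f, the step size alpha, and the use of a least-squares solve in place of the (J−1)T shorthand above are assumptions made to keep the example self-contained.

    import numpy as np

    def numeric_jacobian(f, p, eps=1e-6):
        """Finite-difference Jacobian of the residual function f at parameters p."""
        r0 = f(p)
        J = np.zeros((r0.size, p.size))
        for k in range(p.size):
            dp = np.zeros_like(p)
            dp[k] = eps
            J[:, k] = (f(p + dp) - r0) / eps
        return J

    def refine_pose(f, p0, alpha=1.0, iters=50):
        """Iteratively update the 6 pose parameters to shrink re-projection residuals.

        f(p) must return the stacked residuals x_i - project(X_i; p).
        """
        p = np.asarray(p0, dtype=float)
        for _ in range(iters):
            r = f(p)
            J = numeric_jacobian(f, p)
            step, *_ = np.linalg.lstsq(J, r, rcond=None)   # Gauss-Newton style step
            p = p - alpha * step
            if np.linalg.norm(step) < 1e-8:
                break
        return p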
- Referring back to
FIG. 27 , in steps 2704-2706-2708-2710-2712, embodiments of the disclosure have shown how to analyze damaged parts from single captured image. This process 2704-2706-2708-2710-2712 can be repeated for each input image. For example, in the context of an auto claim, the most claim cases include multiple images of the same vehicle from different viewpoints. Although each image is analyzed independently, the evaluation results can be integrated together, which is referred to herein as “image fusion.” For example, the disclosed system is able to generate a heatmap, for each image, to indicate the region where the vehicle is damaged in the image. By fusing the 2D heatmaps from different images, embodiments of the disclosure are able to enhance the damage assessment, for example, in case the heatmap for one image is not satisfactory. To do so, embodiments of the disclosure utilize the 3D model of the vehicle, and map each heatmap into a common 3D space, leading to a 3D version of a heatmap that is convenient for damage appraisal. In some aspects, image fusion can be thought of “wrapping” the heatmaps of the 2D images onto the 3D model. - More specifically, at
step 2714, the server performs multi-image damage fusion. In some implementations, a 3D model includes a set of 3D vertices. Each set of three vertices grouped together form a face (i.e., triangle). An example 3D model of car hood is shown inFIG. 50 . - One embodiment for generating the fused 3D heatmap is to assign each vertex or face a color associated with the heatmap. In one implementation, each vertex or face is assigned a heatmap intensity value. To achieve this, one embodiment is to first map each single vertex onto a heatmap image, then use interpolation to get an intensity value in that projected location. The intensity value is assigned to the corresponding 3D vertex, thus back projected to the 3D space.
- In one implementation, before we perform the 3D to 2D mapping, embodiments of the disclosure first manually segment the 3D model into individual parts. By doing so, we can distinguish which image regions are covered by each part after the 3D to 2D mapping. This can improve efficiency, since if we are only interested in one part, then we can only calculate the 3D projection for that single part, rather than calculate the entire car.
FIG. 51 illustrates an example of a segmented 3D model. -
FIG. 52 illustrates the effect of directly projecting the 3D car model onto a 2D image with the estimated camera matrix, where each part is denoted with different color. This figure also demonstrates the benefits of segmenting the 3D model into individual parts, as the result provides a dense pixel-wise fine part segmentation of the 2D car image. - After performing the direct 3D to 2D mapping without considering the surface information, a problem arises. Since the 3D model is composed of many 2D surfaces, some vertices are actually occluded by surfaces when mapped to 2D. This may cause confusion, for those occluded vertices will mistakenly ‘colored’ if we do not consider the occlusion issue.
- One embodiment to remove occlusion involves a two-staged approach, including a back-face culling stage and a depth checking stage.
- In back-face culling, since each vertex in the 3D model is assigned a normal direction (for example, as in a ".obj" file), according to the back-face culling algorithm, we can connect the camera center with a vertex by a straight line, forming an angle between that line and the normal direction of that vertex. If the angle is larger than 90 degrees, then the vertex should be invisible to the camera; whereas, if the angle is smaller than 90 degrees, then the vertex is visible, as illustrated in
FIG. 53 . - Back-face culling removes most of the occluded vertices, but still may not solve all the problem entirely, since sometimes a surface facing towards the camera can still be occluded. Thus, some embodiments also perform a second stage occlusion removal method referred to as “depth checking stage.”
- In some implementations, a rendering engine can provide a rendered depth map of the 3D model respective to a certain view point, which denotes the distance between each vertex to the camera center. With such a depth map, embodiments of the disclosure can assign each 3D vertex a rendered distance to the camera center by interpolation (similar to the above described heatmap). Then those rendered ‘distances’ are compared with their true distance calculated using 3D vertex location. If the rendered ‘distance’ is smaller than the true distance, this means that occlusion has happened in that vertex.
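- The two visibility tests can be sketched together as follows; the array layout, the integer pixel projections, and the depth tolerance tol are illustrative assumptions.

    import numpy as np

    def visible_vertices(verts, normals, vert_depths, rendered_depth, cam_center,
                         proj_xy, tol=1e-3):
        """Two-stage visibility test for 3D vertices.

        verts/normals: (N, 3); vert_depths: (N,) true distance to the camera center;
        rendered_depth: (H, W) depth map from the render engine;
        proj_xy: (N, 2) integer pixel location of each projected vertex (in bounds).
        """
        # Stage 1 -- back-face culling: an angle between the view ray and the vertex
        # normal larger than 90 degrees means the vertex faces away from the camera.
        view_dirs = cam_center - verts
        facing = np.einsum('ij,ij->i', view_dirs, normals) > 0
        # Stage 2 -- depth checking: a rendered depth smaller than the true distance
        # means some surface lies in front of the vertex, so it is occluded.
        xs, ys = proj_xy[:, 0], proj_xy[:, 1]
        not_occluded = rendered_depth[ys, xs] >= vert_depths - tol
        return facing & not_occluded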
- By using the two-stage occlusion removal method, embodiments of the disclosure can effectively remove each occluded vertex when performing single-
view 3D to 2D projection. As shown in the example inFIG. 54 , the desired heatmap intensity values are mapped onto visible region of the 3D model. The occluded region is shown with a different color. - Using the above procedure, embodiments of the disclosure map the heatmap intensity of one single view to the 3D model. This process can be repeated for each image separately, and the results are summed together. Alternatively, if multiple heatmaps correspond to the same vertex, the maximum value in each set can be selected as the final per-vertex heatmap intensity. Still further embodiments may use the mean value as the final per-vertex heatmap intensity if multiple heatmaps correspond to the same vertex.
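- The back-projection and fusion steps described above can be sketched as follows; bilinear interpolation is used for the sampling, visible vertices are assumed to have been selected beforehand (e.g., with the occlusion test above), and the max/mean options mirror the alternatives just described.

    import numpy as np

    def vertex_heatmap_intensity(heatmap, proj_xy):
        """Bilinear-sample a 2D damage heatmap at each projected vertex location."""
        h, w = heatmap.shape
        x = np.clip(proj_xy[:, 0], 0, w - 1.001)
        y = np.clip(proj_xy[:, 1], 0, h - 1.001)
        x0, y0 = x.astype(int), y.astype(int)
        dx, dy = x - x0, y - y0
        top = heatmap[y0, x0] * (1 - dx) + heatmap[y0, x0 + 1] * dx
        bot = heatmap[y0 + 1, x0] * (1 - dx) + heatmap[y0 + 1, x0 + 1] * dx
        return top * (1 - dy) + bot * dy

    def fuse_views(per_view_intensity, mode="max"):
        """Combine per-vertex intensities from several images into one 3D heatmap."""
        stacked = np.stack(per_view_intensity)          # (num_views, num_vertices)
        return stacked.max(axis=0) if mode == "max" else stacked.mean(axis=0)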
FIG. 55 illustrates an example of fusing multiple 2D images with heatmaps onto a 3D model. - Once the 3D model has the heatmap fused onto it, the server is able to determine which parts are damaged by comparing the heatmap-fused 3D model to a 3D model of an undamaged vehicle.
- As described, the method shown in
FIG. 27 provides a technique to identify which external parts of a vehicle are damaged, and also which portions of those parts are damaged. As described in FIG. 4 at step 406, from this information, the server may also infer internal damage to the vehicle from detected external damage. Once the externally damaged parts are identified, the server can look up in a database which internal parts are also likely to need repair or replacement based on the set of damaged external parts. This inference can be based on historical models of which internal parts needed to be replaced given certain external damage in prior repairs. As also described in FIG. 4 at step 408, the server can calculate an estimated repair cost for the vehicle based on the detected external damage and inferred internal damage. The server accesses one or more databases of parts and labor costs for each external and internal part that is estimated to need repair or replacement. The estimate can be provided to an insurance claim adjuster for review, adjustment, and approval. - All references, including publications, patent applications, and patents, cited herein are hereby incorporated by reference to the same extent as if each reference were individually and specifically indicated to be incorporated by reference and were set forth in its entirety herein.
- The use of the terms “a” and “an” and “the” and “at least one” and similar referents in the context of describing the invention (especially in the context of the following claims) are to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. The use of the term “at least one” followed by a list of one or more items (for example, “at least one of A and B”) is to be construed to mean one item selected from the listed items (A or B) or any combination of two or more of the listed items (A and B), unless otherwise indicated herein or clearly contradicted by context. The terms “comprising,” “having,” “including,” and “containing” are to be construed as open-ended terms (i.e., meaning “including, but not limited to,”) unless otherwise noted. Recitation of ranges of values herein are merely intended to serve as a shorthand method of referring individually to each separate value falling within the range, unless otherwise indicated herein, and each separate value is incorporated into the specification as if it were individually recited herein. All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and all examples, or exemplary language (e.g., “such as”) provided herein, is intended merely to better illuminate the invention and does not pose a limitation on the scope of the invention unless otherwise claimed. No language in the specification should be construed as indicating any non-claimed element as essential to the practice of the invention.
- Preferred embodiments of this invention are described herein, including the best mode known to the inventors for carrying out the invention. Variations of those preferred embodiments may become apparent to those of ordinary skill in the art upon reading the foregoing description. The inventors expect skilled artisans to employ such variations as appropriate, and the inventors intend for the invention to be practiced otherwise than as specifically described herein. Accordingly, this invention includes all modifications and equivalents of the subject matter recited in the claims appended hereto as permitted by applicable law. Moreover, any combination of the above-described elements in all possible variations thereof is encompassed by the invention unless otherwise indicated herein or otherwise clearly contradicted by context.
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/973,343 US11144889B2 (en) | 2016-04-06 | 2018-05-07 | Automatic assessment of damage and repair costs in vehicles |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/092,480 US10692050B2 (en) | 2016-04-06 | 2016-04-06 | Automatic assessment of damage and repair costs in vehicles |
US15/973,343 US11144889B2 (en) | 2016-04-06 | 2018-05-07 | Automatic assessment of damage and repair costs in vehicles |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/092,480 Continuation-In-Part US10692050B2 (en) | 2016-04-06 | 2016-04-06 | Automatic assessment of damage and repair costs in vehicles |
Publications (2)
Publication Number | Publication Date |
---|---|
US20180260793A1 true US20180260793A1 (en) | 2018-09-13 |
US11144889B2 US11144889B2 (en) | 2021-10-12 |
Family
ID=63444829
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/973,343 Active 2036-10-06 US11144889B2 (en) | 2016-04-06 | 2018-05-07 | Automatic assessment of damage and repair costs in vehicles |
Country Status (1)
Country | Link |
---|---|
US (1) | US11144889B2 (en) |
US11514530B2 (en) * | 2020-05-14 | 2022-11-29 | Ccc Information Services Inc. | Image processing system using convolutional neural networks |
US11538286B2 (en) * | 2019-05-06 | 2022-12-27 | Beijing Baidu Netcom Science And Technology Co., Ltd. | Method and apparatus for vehicle damage assessment, electronic device, and computer storage medium |
US20230012230A1 (en) * | 2019-12-02 | 2023-01-12 | Click-Ins, Ltd. | Systems, methods and programs for generating damage print in a vehicle |
US11562207B2 (en) | 2018-12-29 | 2023-01-24 | Dassault Systemes | Set of neural networks |
US11562474B2 (en) | 2020-01-16 | 2023-01-24 | Fyusion, Inc. | Mobile multi-camera multi-view capture |
WO2023000737A1 (en) * | 2021-07-23 | 2023-01-26 | 明觉科技(北京)有限公司 | Vehicle accident loss assessment method and apparatus |
US11574366B1 (en) * | 2019-04-17 | 2023-02-07 | State Farm Mutual Automobile Insurance Company | Method and system for early identification and settlement of total loss claims |
US11587224B2 (en) | 2020-01-14 | 2023-02-21 | Capital One Services, Llc | Vehicle listing image detection and alert system |
US11587180B2 (en) * | 2020-05-14 | 2023-02-21 | Ccc Information Services Inc. | Image processing system |
US11587315B2 (en) | 2019-06-19 | 2023-02-21 | Deere & Company | Apparatus and methods for augmented reality measuring of equipment |
US20230063002A1 (en) * | 2021-08-25 | 2023-03-02 | Genpact Luxembourg S.à r.l. II | Dimension estimation using duplicate instance identification in a multiview and multiscale system |
US20230067659A1 (en) * | 2021-08-24 | 2023-03-02 | Ford Global Technologies, Llc | Systems and methods for detecting vehicle defects |
US11605151B2 (en) | 2021-03-02 | 2023-03-14 | Fyusion, Inc. | Vehicle undercarriage imaging |
US11610074B1 (en) * | 2017-06-29 | 2023-03-21 | State Farm Mutual Automobile Insurance Company | Deep learning image processing method for determining vehicle damage |
US11620530B2 (en) * | 2019-01-17 | 2023-04-04 | Fujitsu Limited | Learning method, and learning apparatus, and recording medium |
WO2023055794A1 (en) * | 2021-09-28 | 2023-04-06 | Watts Robert Lee | Systems and methods for repair of vehicle body damage |
US11625791B1 (en) | 2012-08-16 | 2023-04-11 | Allstate Insurance Company | Feedback loop in mobile damage assessment and claims processing |
CN115965448A (en) * | 2023-03-16 | 2023-04-14 | 邦邦汽车销售服务(北京)有限公司 | Vehicle maintenance accessory recommendation method and system based on image processing |
US11631165B2 (en) * | 2020-01-31 | 2023-04-18 | Sachcontrol Gmbh | Repair estimation based on images |
US11631234B2 (en) | 2019-07-22 | 2023-04-18 | Adobe, Inc. | Automatically detecting user-requested objects in images |
WO2023091859A1 (en) * | 2021-11-16 | 2023-05-25 | Solera Holdings, Llc | Transfer of damage markers from images to 3d vehicle models for damage assessment |
US11676030B2 (en) * | 2019-01-17 | 2023-06-13 | Fujitsu Limited | Learning method, learning apparatus, and computer-readable recording medium |
US11681919B2 (en) | 2020-03-12 | 2023-06-20 | Adobe Inc. | Automatically selecting query objects in digital images |
US20230230172A1 (en) * | 2020-05-28 | 2023-07-20 | Jonathan PYLE | Vehicle Repair Estimation System and Method |
US11710158B2 (en) | 2020-01-23 | 2023-07-25 | Ford Global Technologies, Llc | Vehicle damage estimation |
US11720572B2 (en) | 2018-01-08 | 2023-08-08 | Advanced New Technologies Co., Ltd. | Method and system for content recommendation |
EP4224417A1 (en) * | 2022-01-31 | 2023-08-09 | Vehicle Service Group, LLC | Assessing damages on vehicles |
EP4028977A4 (en) * | 2019-09-09 | 2023-09-27 | Neural Claim System, Inc. | Methods and systems for submitting and/or processing insurance claims for damaged motor vehicle glass |
US11776142B2 (en) | 2020-01-16 | 2023-10-03 | Fyusion, Inc. | Structuring visual data |
US11783443B2 (en) | 2019-01-22 | 2023-10-10 | Fyusion, Inc. | Extraction of standardized images from a single view or multi-view capture |
US11798095B1 (en) | 2020-03-30 | 2023-10-24 | Allstate Insurance Company | Commercial claim processing platform using machine learning to generate shared economy insights |
US11798088B1 (en) | 2012-09-10 | 2023-10-24 | Allstate Insurance Company | Optimized inventory analysis for insurance purposes |
US11797847B2 (en) | 2019-07-22 | 2023-10-24 | Adobe Inc. | Selecting instances of detected objects in images utilizing object detection models |
WO2023205220A1 (en) * | 2022-04-19 | 2023-10-26 | Tractable Ltd | Remote vehicle inspection |
US20230377047A1 (en) * | 2022-05-18 | 2023-11-23 | The Toronto-Dominion Bank | Systems and methods for automated data processing using machine learning for vehicle loss detection |
US20230410271A1 (en) * | 2022-06-16 | 2023-12-21 | Toyota Motor Engineering & Manufacturing North America, Inc. | Vehicle assessment systems and methods |
US11861665B2 (en) | 2022-02-28 | 2024-01-02 | Concat Systems, Inc. | Artificial intelligence machine learning system for classifying images and producing a predetermined visual output |
US20240013266A1 (en) * | 2020-01-31 | 2024-01-11 | Cognivision Inc. | Estimation apparatus, estimation system, and estimation method |
US11886494B2 (en) | 2020-02-25 | 2024-01-30 | Adobe Inc. | Utilizing natural language processing automatically select objects in images |
US11915479B1 (en) | 2019-12-30 | 2024-02-27 | Scope Technologies Holdings Limited | Systems and methods for automatedly identifying, documenting and reporting vehicle damage |
US20240087330A1 (en) * | 2022-09-12 | 2024-03-14 | Mitchell International, Inc. | System and method for automatically identifying vehicle panels requiring paint blending |
US11935219B1 (en) * | 2020-04-10 | 2024-03-19 | Allstate Insurance Company | Systems and methods for automated property damage estimations and detection based on image analysis and neural network training |
US20240104709A1 (en) * | 2022-09-27 | 2024-03-28 | Hyundai Mobis Co., Ltd. | Method and system for providing vehicle exterior damage determination service |
US11971953B2 (en) | 2021-02-02 | 2024-04-30 | Inait Sa | Machine annotation of photographic images |
US11972569B2 (en) | 2021-01-26 | 2024-04-30 | Adobe Inc. | Segmenting objects in digital images utilizing a multi-object segmentation model framework |
US11978000B2 (en) | 2018-02-01 | 2024-05-07 | Advanced New Technologies Co., Ltd. | System and method for determining a decision-making strategy |
US11983745B2 (en) | 2021-08-06 | 2024-05-14 | Capital One Services, Llc | Systems and methods for valuation of a vehicle |
US12020414B2 (en) | 2019-07-22 | 2024-06-25 | Adobe Inc. | Utilizing deep neural networks to automatically select instances of detected objects in images |
US12026602B1 (en) * | 2018-08-01 | 2024-07-02 | State Farm Mutual Automobile Insurance Company | Vehicle damage claims self-service |
US12118752B2 (en) | 2019-07-22 | 2024-10-15 | Adobe Inc. | Determining colors of objects in digital images |
US12118779B1 (en) * | 2021-09-30 | 2024-10-15 | United Services Automobile Association (Usaa) | System and method for assessing structural damage in occluded aerial images |
US20240378809A1 (en) * | 2023-05-12 | 2024-11-14 | Adobe Inc. | Digital image decaling |
US12175540B2 (en) | 2012-08-16 | 2024-12-24 | Allstate Insurance Company | Processing insured items holistically with mobile damage assessment and claims processing |
US12182881B2 (en) | 2012-08-16 | 2024-12-31 | Allstate Insurance Company | User devices in claims damage estimation |
EP4492338A1 (en) * | 2023-07-10 | 2025-01-15 | Cambridge Mobile Telematics Inc. | Automating vehicle damage inspection using claims photos |
US12205266B2 (en) | 2021-02-05 | 2025-01-21 | Fyusion, Inc. | Multi-view interactive digital media representation viewer |
US12204869B2 (en) | 2019-01-22 | 2025-01-21 | Fyusion, Inc. | Natural language understanding for visual tagging |
US12229383B2 (en) | 2020-09-09 | 2025-02-18 | State Farm Mutual Automobile Insurance Company | Vehicular incident reenactment using three-dimensional (3D) representations |
US12229462B2 (en) | 2022-02-28 | 2025-02-18 | Freddy Technologies Llc | System and method for automatically curating and displaying images |
US12235928B2 (en) | 2021-02-02 | 2025-02-25 | Inait Sa | Machine annotation of photographic images |
US12243170B2 (en) | 2019-01-22 | 2025-03-04 | Fyusion, Inc. | Live in-camera overlays |
US12244784B2 (en) | 2019-07-29 | 2025-03-04 | Fyusion, Inc. | Multiview interactive digital media representation inventory verification |
US12272122B1 (en) * | 2022-08-11 | 2025-04-08 | Amazon Technologies, Inc. | Techniques for optimizing object detection frameworks |
RU2838349C1 (en) * | 2024-10-30 | 2025-04-14 | Общество с ограниченной ответственностью "Тетрон" | Method for automated pre-trip technical inspection of vehicles |
EP4539485A1 (en) * | 2023-10-12 | 2025-04-16 | ACV Auctions Inc. | Systems and methods for guiding the capture of vehicle images and videos |
US12361538B1 (en) * | 2024-04-07 | 2025-07-15 | Uveye Ltd. | Detection and estimation of defects in vehicle's exterior |
US12393987B1 (en) * | 2019-03-08 | 2025-08-19 | State Farm Mutual Automobile Insurance Company | Methods and apparatus for automated insurance claim processing using historical data |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108550080A (en) * | 2018-03-16 | 2018-09-18 | 阿里巴巴集团控股有限公司 | Article damage identification method and device |
US11816641B2 (en) * | 2018-09-21 | 2023-11-14 | Ttx Company | Systems and methods for task distribution and tracking |
KR102096386B1 (en) * | 2018-12-31 | 2020-04-03 | 주식회사 애자일소다 | Method and system of learning a model that automatically determines damage information for each part of an automobile based on deep learning |
US11449832B2 (en) * | 2019-05-17 | 2022-09-20 | Allstate Insurance Company | Systems and methods for obtaining data annotations |
US11538146B2 (en) * | 2019-12-19 | 2022-12-27 | Qeexo, Co. | Automated machine vision-based defect detection |
US20210241208A1 (en) * | 2020-01-31 | 2021-08-05 | Capital One Services, Llc | Method and system for identifying and onboarding a vehicle into inventory |
US11532098B2 (en) * | 2020-09-01 | 2022-12-20 | Ford Global Technologies, Llc | Determining multi-degree-of-freedom pose to navigate a vehicle |
US12125002B2 (en) * | 2021-11-12 | 2024-10-22 | State Farm Mutual Automobile Insurance Company | Systems and methods of determining vehicle reparability |
US20230245239A1 (en) * | 2022-01-28 | 2023-08-03 | Allstate Insurance Company | Systems and methods for modeling item damage severity |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140229207A1 (en) | 2011-09-29 | 2014-08-14 | Tata Consultancy Services Limited | Damage assessment of an object |
JP6153366B2 (en) * | 2013-03-29 | 2017-06-28 | 株式会社バンダイナムコエンターテインメント | Image generation system and program |
US9824453B1 (en) * | 2015-10-14 | 2017-11-21 | Allstate Insurance Company | Three dimensional image scan for vehicle |
US11144889B2 (en) * | 2016-04-06 | 2021-10-12 | American International Group, Inc. | Automatic assessment of damage and repair costs in vehicles |
GB2554361B8 (en) * | 2016-09-21 | 2022-07-06 | Emergent Network Intelligence Ltd | Automatic image based object damage assessment |
2018
- 2018-05-07: US application US15/973,343 filed, subsequently granted as US11144889B2 (legal status: Active)
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150106133A1 (en) * | 2013-10-15 | 2015-04-16 | Audatex North America, Inc. | Mobile system for generating a damaged vehicle insurance estimate |
US10319094B1 (en) * | 2016-05-20 | 2019-06-11 | Ccc Information Services Inc. | Technology for capturing, transmitting, and analyzing images of objects |
Cited By (386)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11783428B2 (en) | 2012-08-16 | 2023-10-10 | Allstate Insurance Company | Agent-facilitated claims damage estimation |
US11367144B2 (en) | 2012-08-16 | 2022-06-21 | Allstate Insurance Company | Agent-facilitated claims damage estimation |
US12175540B2 (en) | 2012-08-16 | 2024-12-24 | Allstate Insurance Company | Processing insured items holistically with mobile damage assessment and claims processing |
US12182881B2 (en) | 2012-08-16 | 2024-12-31 | Allstate Insurance Company | User devices in claims damage estimation |
US11361385B2 (en) | 2012-08-16 | 2022-06-14 | Allstate Insurance Company | Application facilitated claims damage estimation |
US10878507B1 (en) | 2012-08-16 | 2020-12-29 | Allstate Insurance Company | Feedback loop in mobile damage assessment and claims processing |
US11455691B2 (en) | 2012-08-16 | 2022-09-27 | Allstate Insurance Company | Processing insured items holistically with mobile damage assessment and claims processing |
US10685400B1 (en) * | 2012-08-16 | 2020-06-16 | Allstate Insurance Company | Feedback loop in mobile damage assessment and claims processing |
US11386503B2 (en) | 2012-08-16 | 2022-07-12 | Allstate Insurance Company | Processing insured items holistically with mobile damage assessment and claims processing |
US12315018B2 (en) | 2012-08-16 | 2025-05-27 | Allstate Insurance Company | Configuration and transfer of image data using a mobile device |
US11625791B1 (en) | 2012-08-16 | 2023-04-11 | Allstate Insurance Company | Feedback loop in mobile damage assessment and claims processing |
US11915321B2 (en) | 2012-08-16 | 2024-02-27 | Allstate Insurance Company | Configuration and transfer of image data using a mobile device |
US11580605B2 (en) | 2012-08-16 | 2023-02-14 | Allstate Insurance Company | Feedback loop in mobile damage assessment and claims processing |
US12079877B2 (en) | 2012-08-16 | 2024-09-03 | Allstate Insurance Company | Processing insured items holistically with mobile damage assessment and claims processing |
US11532049B2 (en) | 2012-08-16 | 2022-12-20 | Allstate Insurance Company | Configuration and transfer of image data using a mobile device |
US11532048B2 (en) | 2012-08-16 | 2022-12-20 | Allstate Insurance Company | User interactions in mobile damage assessment and claims processing |
US10803532B1 (en) | 2012-08-16 | 2020-10-13 | Allstate Insurance Company | Processing insured items holistically with mobile damage assessment and claims processing |
US11403713B2 (en) | 2012-08-16 | 2022-08-02 | Allstate Insurance Company | Configuration and transfer of image data using a mobile device |
US12079878B2 (en) | 2012-08-16 | 2024-09-03 | Allstate Insurance Company | Feedback loop in mobile damage assessment and claims processing |
US11461849B2 (en) | 2012-09-10 | 2022-10-04 | Allstate Insurance Company | Recommendation of insurance products based on an inventory analysis |
US11798088B1 (en) | 2012-09-10 | 2023-10-24 | Allstate Insurance Company | Optimized inventory analysis for insurance purposes |
US11598900B2 (en) | 2015-04-14 | 2023-03-07 | Utopus Insights, Inc. | Weather-driven multi-category infrastructure impact forecasting |
US10989838B2 (en) | 2015-04-14 | 2021-04-27 | Utopus Insights, Inc. | Weather-driven multi-category infrastructure impact forecasting |
US11048021B2 (en) * | 2015-04-14 | 2021-06-29 | Utopus Insights, Inc. | Weather-driven multi-category infrastructure impact forecasting |
US11144889B2 (en) * | 2016-04-06 | 2021-10-12 | American International Group, Inc. | Automatic assessment of damage and repair costs in vehicles |
US11443288B2 (en) | 2016-04-06 | 2022-09-13 | American International Group, Inc. | Automatic assessment of damage and repair costs in vehicles |
US10839250B2 (en) * | 2016-05-09 | 2020-11-17 | Uesse S.R.L. | Process and system for computing the cost of usable and consumable materials for painting of motor vehicles, from analysis of deformations in motor vehicles |
US20190188523A1 (en) * | 2016-05-09 | 2019-06-20 | Uesse S.R.L. | Process and System for Computing the Cost of Usable and Consumable Materials for Painting of Motor Vehicles, From Analysis of Deformations in Motor Vehicles |
US11062473B2 (en) * | 2016-12-02 | 2021-07-13 | Gabriel Fine | Automatically determining orientation and position of medically invasive devices via image processing |
US11625850B2 (en) * | 2016-12-02 | 2023-04-11 | Gabriel Fine | System for guiding medically invasive devices relative to other medical devices via image processing |
US20210133999A1 (en) * | 2016-12-02 | 2021-05-06 | Gabriel Fine | Augmenting image data of medically invasive devices having non-medical structures |
US20210110570A1 (en) * | 2016-12-02 | 2021-04-15 | Gabriel Fine | System for guiding medically invasive devices relative to anatomical structures via image processing |
US20210110569A1 (en) * | 2016-12-02 | 2021-04-15 | Gabriel Fine | System for guiding medically invasive devices relative to other medical devices via image processing |
US11625849B2 (en) * | 2016-12-02 | 2023-04-11 | Gabriel Fine | Automatically determining orientation and position of medically invasive devices via image processing |
US11663525B2 (en) * | 2016-12-02 | 2023-05-30 | Gabriel Fine | Augmenting unlabeled images of medically invasive devices via image processing |
US20210142506A1 (en) * | 2016-12-02 | 2021-05-13 | Gabriel Fine | Guiding medically invasive devices with radiation absorbing markers via image processing |
US11657329B2 (en) * | 2016-12-02 | 2023-05-23 | Gabriel Fine | Augmenting image data of medically invasive devices having non-medical structures |
US20250190875A1 (en) * | 2016-12-02 | 2025-06-12 | Gabriel Fine | Image-based detection of internal object |
US10529088B2 (en) * | 2016-12-02 | 2020-01-07 | Gabriel Fine | Automatically determining orientation and position of medically invasive devices via image processing |
US20210110568A1 (en) * | 2016-12-02 | 2021-04-15 | Gabriel Fine | Displaying augmented image data for medically invasive devices via image processing |
US20210110567A1 (en) * | 2016-12-02 | 2021-04-15 | Gabriel Fine | Automatically determining orientation and position of medically invasive devices via image processing |
US11657331B2 (en) * | 2016-12-02 | 2023-05-23 | Gabriel Fine | Guiding medically invasive devices with radiation absorbing markers via image processing |
US20210110566A1 (en) * | 2016-12-02 | 2021-04-15 | Gabriel Fine | System for augmenting image data of medically invasive devices using optical imaging |
US20250165870A1 (en) * | 2016-12-02 | 2025-05-22 | Gabriel Fine | Image-based detection of object condition |
US20210133998A1 (en) * | 2016-12-02 | 2021-05-06 | Gabriel Fine | Augmenting unlabeled images of medically invasive devices via image processing |
US20230289664A1 (en) * | 2016-12-02 | 2023-09-14 | Gabriel Fine | System for monitoring object inserted into patient's body via image processing |
US12242935B2 (en) * | 2016-12-02 | 2025-03-04 | Gabriel Fine | System for monitoring object inserted into patient's body via image processing |
US11657330B2 (en) * | 2016-12-02 | 2023-05-23 | Gabriel Fine | System for guiding medically invasive devices relative to anatomical structures via image processing |
US20250173622A1 (en) * | 2016-12-02 | 2025-05-29 | Gabriel Fine | Presurgical planning |
US20180158209A1 (en) * | 2016-12-02 | 2018-06-07 | Gabriel Fine | Automatically determining orientation and position of medically invasive devices via image processing |
US11681952B2 (en) * | 2016-12-02 | 2023-06-20 | Gabriel Fine | System for augmenting image data of medically invasive devices using optical imaging |
US11687834B2 (en) * | 2016-12-02 | 2023-06-27 | Gabriel Fine | Displaying augmented image data for medically invasive devices via image processing |
US12229938B2 (en) | 2016-12-23 | 2025-02-18 | State Farm Mutual Automobile Insurance Company | Systems and methods for utilizing machine-assisted vehicle inspection to identify insurance buildup or fraud |
US11107306B1 (en) | 2016-12-23 | 2021-08-31 | State Farm Mutual Automobile Insurance Company | Systems and methods for machine-assisted vehicle inspection |
US10825097B1 (en) * | 2016-12-23 | 2020-11-03 | State Farm Mutual Automobile Insurance Company | Systems and methods for utilizing machine-assisted vehicle inspection to identify insurance buildup or fraud |
US11508054B2 (en) | 2016-12-23 | 2022-11-22 | State Farm Mutual Automobile Insurance Company | Systems and methods for utilizing machine-assisted vehicle inspection to identify insurance buildup or fraud |
US11854181B2 (en) | 2016-12-23 | 2023-12-26 | State Farm Mutual Automobile Insurance Company | Systems and methods for utilizing machine-assisted vehicle inspection to identify insurance buildup or fraud |
US11080841B1 (en) | 2016-12-23 | 2021-08-03 | State Farm Mutual Automobile Insurance Company | Systems and methods for machine-assisted vehicle inspection |
US10628890B2 (en) * | 2017-02-23 | 2020-04-21 | International Business Machines Corporation | Visual analytics based vehicle insurance anti-fraud detection |
US10789786B2 (en) | 2017-04-11 | 2020-09-29 | Alibaba Group Holding Limited | Picture-based vehicle loss assessment |
US20180293552A1 (en) * | 2017-04-11 | 2018-10-11 | Alibaba Group Holding Limited | Image-based vehicle maintenance plan |
US11049334B2 (en) | 2017-04-11 | 2021-06-29 | Advanced New Technologies Co., Ltd. | Picture-based vehicle loss assessment |
US20190213563A1 (en) * | 2017-04-11 | 2019-07-11 | Alibaba Group Holding Limited | Image-based vehicle maintenance plan |
US10817956B2 (en) | 2017-04-11 | 2020-10-27 | Alibaba Group Holding Limited | Image-based vehicle damage determining method and apparatus, and electronic device |
US20180349741A1 (en) * | 2017-05-31 | 2018-12-06 | Fujitsu Limited | Computer-readable recording medium, learning method, and object detection device |
US10803357B2 (en) * | 2017-05-31 | 2020-10-13 | Fujitsu Limited | Computer-readable recording medium, training method, and object detection device |
US11610074B1 (en) * | 2017-06-29 | 2023-03-21 | State Farm Mutual Automobile Insurance Company | Deep learning image processing method for determining vehicle damage |
US12008658B2 (en) * | 2017-06-29 | 2024-06-11 | State Farm Mutual Automobile Insurance Company | Deep learning image processing method for determining vehicle damage |
US11200513B2 (en) * | 2017-10-13 | 2021-12-14 | Carrier Corporation | Real estate image analysis |
US20190114597A1 (en) * | 2017-10-16 | 2019-04-18 | Mitchell International, Inc. | Methods for predictive estimation of repair lines based on historical data and devices thereof |
US10803328B1 (en) * | 2017-11-15 | 2020-10-13 | Uatc, Llc | Semantic and instance segmentation |
US11005961B2 (en) * | 2017-11-20 | 2021-05-11 | Marc Berger | Ad-hoc low power low cost communication via a network of electronic stickers |
US12299871B2 (en) | 2017-11-22 | 2025-05-13 | State Farm Mutual Automobile Insurance Company | Guided vehicle capture for virtual mode generation |
US11922618B2 (en) | 2017-11-22 | 2024-03-05 | State Farm Mutual Automobile Insurance Company | Guided vehicle capture for virtual model generation |
US11315239B1 (en) * | 2017-11-22 | 2022-04-26 | State Farm Mutual Automobile Insurance Company | Guided vehicle capture for virtual mode generation |
US20180121888A1 (en) * | 2017-12-20 | 2018-05-03 | Patrick Richard O'Reilly | System and method for improved vehicle collision damage estimating and repair |
US11720572B2 (en) | 2018-01-08 | 2023-08-08 | Advanced New Technologies Co., Ltd. | Method and system for content recommendation |
US11978000B2 (en) | 2018-02-01 | 2024-05-07 | Advanced New Technologies Co., Ltd. | System and method for determining a decision-making strategy |
US11461872B1 (en) | 2018-03-02 | 2022-10-04 | Autodata Solutions, Inc. | Method and system for vehicle image repositioning |
US11270168B1 (en) * | 2018-03-02 | 2022-03-08 | Autodata Solutions, Inc. | Method and system for vehicle image classification |
US12299719B2 (en) * | 2018-05-04 | 2025-05-13 | Allstate Insurance Company | Processing system having a machine learning engine for providing a surface dimension output |
US11436648B1 (en) * | 2018-05-04 | 2022-09-06 | Allstate Insurance Company | Processing system having a machine learning engine for providing a surface dimension output |
US12333581B2 (en) | 2018-05-04 | 2025-06-17 | Allstate Insurance Company | Processing system having a machine learning engine for providing a surface dimension output |
US11257132B1 (en) * | 2018-05-04 | 2022-02-22 | Allstate Insurance Company | Processing systems and methods having a machine learning engine for providing a surface dimension output |
US20220405816A1 (en) * | 2018-05-04 | 2022-12-22 | Allstate Insurance Company | Processing system having a machine learning engine for providing a surface dimension output |
US10733723B2 (en) * | 2018-05-22 | 2020-08-04 | Midea Group Co., Ltd. | Methods and system for improved quality inspection |
US20190362480A1 (en) * | 2018-05-22 | 2019-11-28 | Midea Group Co., Ltd. | Methods and system for improved quality inspection |
US11386328B2 (en) * | 2018-05-30 | 2022-07-12 | Robert Bosch Gmbh | Method, apparatus and computer program for generating robust automated learning systems and testing trained automated learning systems |
US11410549B2 (en) * | 2018-05-31 | 2022-08-09 | Boe Technology Group Co., Ltd. | Method, device, readable medium and electronic device for identifying traffic light signal |
US12039578B2 (en) * | 2018-06-15 | 2024-07-16 | State Farm Mutual Automobile Insurance Company | Methods and systems for automatic processing of images of a damaged vehicle and estimating a repair cost |
US20220114627A1 (en) * | 2018-06-15 | 2022-04-14 | State Farm Mutual Automobile Insurance Company | Methods and systems for automatic processing of images of a damaged vehicle and estimating a repair cost |
US11070763B2 (en) * | 2018-06-27 | 2021-07-20 | Snap-On Incorporated | Method and system for displaying images captured by a computing device including a visible light camera and a thermal camera |
US12026602B1 (en) * | 2018-08-01 | 2024-07-02 | State Farm Mutual Automobile Insurance Company | Vehicle damage claims self-service |
US11790632B2 (en) * | 2018-08-24 | 2023-10-17 | Advanced New Technologies Co., Ltd. | Method and apparatus for sample labeling, and method and apparatus for identifying damage classification |
US20210124967A1 (en) * | 2018-08-24 | 2021-04-29 | Advanced New Technologies Co., Ltd. | Method and apparatus for sample labeling, and method and apparatus for identifying damage classification |
US11010838B2 (en) * | 2018-08-31 | 2021-05-18 | Advanced New Technologies Co., Ltd. | System and method for optimizing damage detection results |
US11475660B2 (en) | 2018-08-31 | 2022-10-18 | Advanced New Technologies Co., Ltd. | Method and system for facilitating recognition of vehicle parts based on a neural network |
US11080839B2 (en) * | 2018-08-31 | 2021-08-03 | Advanced New Technologies Co., Ltd. | System and method for training a damage identification model |
US11748399B2 (en) | 2018-08-31 | 2023-09-05 | Advanced New Technologies Co., Ltd. | System and method for training a damage identification model |
US20210133501A1 (en) * | 2018-09-04 | 2021-05-06 | Advanced New Technologies Co., Ltd. | Method and apparatus for generating vehicle damage image on the basis of GAN network
US10565476B1 (en) * | 2018-09-04 | 2020-02-18 | StradVision, Inc. | Method and computing device for generating image data set for learning to be used for detection of obstruction in autonomous driving circumstances and learning method and learning device using the same |
US11972599B2 (en) * | 2018-09-04 | 2024-04-30 | Advanced New Technologies Co., Ltd. | Method and apparatus for generating vehicle damage image on the basis of GAN network |
US11042978B2 (en) * | 2018-09-10 | 2021-06-22 | Advanced New Technologies Co., Ltd. | Method and apparatus for performing damage segmentation on vehicle damage image |
US10853699B2 (en) * | 2018-09-18 | 2020-12-01 | Advanced New Technologies Co., Ltd. | Method and apparatus for vehicle damage identification |
TWI703510B (en) * | 2018-09-18 | 2020-09-01 | 香港商阿里巴巴集團服務有限公司 | Vehicle damage identification method, device and computing equipment |
US20200089990A1 (en) * | 2018-09-18 | 2020-03-19 | Alibaba Group Holding Limited | Method and apparatus for vehicle damage identification |
US20200167594A1 (en) * | 2018-09-18 | 2020-05-28 | Alibaba Group Holding Limited | Method and apparatus for vehicle damage identification |
US10691982B2 (en) * | 2018-09-18 | 2020-06-23 | Alibaba Group Holding Limited | Method and apparatus for vehicle damage identification |
CN109215119A (en) * | 2018-09-18 | 2019-01-15 | 阿里巴巴集团控股有限公司 | Three-dimensional modeling method and device for a damaged vehicle
US11150875B2 (en) * | 2018-09-27 | 2021-10-19 | Microsoft Technology Licensing, Llc | Automated content editor |
US20200111061A1 (en) * | 2018-10-03 | 2020-04-09 | Solera Holdings, Inc. | Apparatus and Method for Combined Visual Intelligence |
CN109410218A (en) * | 2018-10-08 | 2019-03-01 | 百度在线网络技术(北京)有限公司 | Method and apparatus for generating vehicle damage information |
US11244435B2 (en) * | 2018-10-08 | 2022-02-08 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method and apparatus for generating vehicle damage information |
US11392792B2 (en) * | 2018-10-08 | 2022-07-19 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method and apparatus for generating vehicle damage information |
EP3594897A1 (en) * | 2018-10-11 | 2020-01-15 | Baidu Online Network Technology (Beijing) Co., Ltd. | Measuring method and apparatus for damaged part of vehicle |
CN109544623A (en) * | 2018-10-11 | 2019-03-29 | 百度在线网络技术(北京)有限公司 | Measurement method and device for a vehicle damage region
US11043000B2 (en) * | 2018-10-11 | 2021-06-22 | Baidu Online Network Technology Co., Ltd. | Measuring method and apparatus for damaged part of vehicle |
US11580800B2 (en) * | 2018-11-08 | 2023-02-14 | Verizon Patent And Licensing Inc. | Computer vision based vehicle inspection report automation |
US20200151974A1 (en) * | 2018-11-08 | 2020-05-14 | Verizon Patent And Licensing Inc. | Computer vision based vehicle inspection report automation |
US10970599B2 (en) * | 2018-11-15 | 2021-04-06 | Adobe Inc. | Learning copy space using regression and segmentation neural networks |
US11605168B2 (en) | 2018-11-15 | 2023-03-14 | Adobe Inc. | Learning copy space using regression and segmentation neural networks |
US11669790B2 (en) * | 2018-11-21 | 2023-06-06 | Enlitic, Inc. | Intensity transform augmentation system and methods for use therewith |
US20210295966A1 (en) * | 2018-11-21 | 2021-09-23 | Enlitic, Inc. | Intensity transform augmentation system and methods for use therewith |
CN109614914A (en) * | 2018-12-05 | 2019-04-12 | 北京纵目安驰智能科技有限公司 | Parking space vertex positioning method, device and storage medium |
CN109614935A (en) * | 2018-12-12 | 2019-04-12 | 泰康保险集团股份有限公司 | Car damage identification method and device, storage medium and electronic equipment |
CN109685780A (en) * | 2018-12-17 | 2019-04-26 | 河海大学 | Retail commodity recognition method based on convolutional neural networks
US10489683B1 (en) * | 2018-12-17 | 2019-11-26 | Bodygram, Inc. | Methods and systems for automatic generation of massive training data sets from 3D models for training deep learning networks |
US20200211303A1 (en) * | 2018-12-26 | 2020-07-02 | Allstate Insurance Company | Systems and methods for system generated damage analysis |
US12322219B2 (en) * | 2018-12-26 | 2025-06-03 | Allstate Insurance Company | System generated damage analysis using scene templates |
US11741763B2 (en) * | 2018-12-26 | 2023-08-29 | Allstate Insurance Company | Systems and methods for system generated damage analysis |
US20210142590A1 (en) * | 2018-12-26 | 2021-05-13 | Allstate Insurance Company | System generated damage analysis using scene templates |
US20230410577A1 (en) * | 2018-12-26 | 2023-12-21 | Allstate Insurance Company | Systems and methods for system generated damage analysis |
US11443192B2 (en) * | 2018-12-29 | 2022-09-13 | Dassault Systemes | Machine-learning for 3D modeled object inference |
US11562207B2 (en) | 2018-12-29 | 2023-01-24 | Dassault Systemes | Set of neural networks |
US11887064B2 (en) * | 2018-12-31 | 2024-01-30 | Agilesoda Inc. | Deep learning-based system and method for automatically determining degree of damage to each area of vehicle |
US20210327042A1 (en) * | 2018-12-31 | 2021-10-21 | Agilesoda Inc. | Deep learning-based system and method for automatically determining degree of damage to each area of vehicle |
US20220114561A1 (en) * | 2019-01-04 | 2022-04-14 | Robert Lee Watts | Systems and methods for repair of vehicle body damage |
US11790326B2 (en) * | 2019-01-04 | 2023-10-17 | Robert Lee Watts | Systems and methods for repair of vehicle body damage |
US11049233B2 (en) * | 2019-01-14 | 2021-06-29 | Ford Global Technologies, Llc | Systems and methods for detecting and reporting vehicle damage events |
US11676030B2 (en) * | 2019-01-17 | 2023-06-13 | Fujitsu Limited | Learning method, learning apparatus, and computer-readable recording medium |
US11620530B2 (en) * | 2019-01-17 | 2023-04-04 | Fujitsu Limited | Learning method, and learning apparatus, and recording medium |
US11409988B2 (en) * | 2019-01-17 | 2022-08-09 | Fujitsu Limited | Method, recording medium, and device for utilizing feature quantities of augmented training data |
US11386401B2 (en) * | 2019-01-20 | 2022-07-12 | Mitchell Repair Information Company, Llc | Methods and systems to provide packages of repair information based on component identifiers |
US12131502B2 (en) | 2019-01-22 | 2024-10-29 | Fyusion, Inc. | Object pose estimation in visual data |
US20200234488A1 (en) * | 2019-01-22 | 2020-07-23 | Fyusion, Inc. | Damage detection from multi-view visual data |
US11748907B2 (en) | 2019-01-22 | 2023-09-05 | Fyusion, Inc. | Object pose estimation in visual data |
US12243170B2 (en) | 2019-01-22 | 2025-03-04 | Fyusion, Inc. | Live in-camera overlays |
US11783443B2 (en) | 2019-01-22 | 2023-10-10 | Fyusion, Inc. | Extraction of standardized images from a single view or multi-view capture |
US11475626B2 (en) | 2019-01-22 | 2022-10-18 | Fyusion, Inc. | Damage detection from multi-view visual data |
US11727626B2 (en) * | 2019-01-22 | 2023-08-15 | Fyusion, Inc. | Damage detection from multi-view visual data |
US11354851B2 (en) | 2019-01-22 | 2022-06-07 | Fyusion, Inc. | Damage detection from multi-view visual data |
US10887582B2 (en) * | 2019-01-22 | 2021-01-05 | Fyusion, Inc. | Object damage aggregation |
US11176704B2 (en) | 2019-01-22 | 2021-11-16 | Fyusion, Inc. | Object pose estimation in visual data |
US11989822B2 (en) | 2019-01-22 | 2024-05-21 | Fyusion, Inc. | Damage detection from multi-view visual data |
US12203872B2 (en) * | 2019-01-22 | 2025-01-21 | Fyusion, Inc. | Damage detection from multi-view visual data |
EP3915050A4 (en) * | 2019-01-22 | 2022-09-14 | Fyusion, Inc. | Damage detection from multi-view visual data |
US12204869B2 (en) | 2019-01-22 | 2025-01-21 | Fyusion, Inc. | Natural language understanding for visual tagging |
US20210312702A1 (en) * | 2019-01-22 | 2021-10-07 | Fyusion, Inc. | Damage detection from multi-view visual data |
US10950033B2 (en) * | 2019-01-22 | 2021-03-16 | Fyusion, Inc. | Damage detection from multi-view visual data |
US20200236343A1 (en) * | 2019-01-22 | 2020-07-23 | Fyusion, Inc. | Object damage aggregation |
US10410352B1 (en) * | 2019-01-25 | 2019-09-10 | StradVision, Inc. | Learning method and learning device for improving segmentation performance to be used for detecting events including pedestrian event, vehicle event, falling event and fallen event using edge loss and test method and test device using the same |
US10402977B1 (en) * | 2019-01-25 | 2019-09-03 | StradVision, Inc. | Learning method and learning device for improving segmentation performance in road obstacle detection required to satisfy level 4 and level 5 of autonomous vehicles using laplacian pyramid network and testing method and testing device using the same |
US12393987B1 (en) * | 2019-03-08 | 2025-08-19 | State Farm Mutual Automobile Insurance Company | Methods and apparatus for automated insurance claim processing using historical data |
CN110287760A (en) * | 2019-03-28 | 2019-09-27 | 电子科技大学 | A method for occlusion detection of facial features based on deep learning
US11475246B2 (en) * | 2019-04-02 | 2022-10-18 | Synthesis Ai, Inc. | System and method for generating training data for computer vision systems based on image segmentation |
US11455496B2 (en) | 2019-04-02 | 2022-09-27 | Synthesis Ai, Inc. | System and method for domain adaptation using synthetic data |
US11455495B2 (en) | 2019-04-02 | 2022-09-27 | Synthesis Ai, Inc. | System and method for visual recognition using synthetic training data |
US12045893B2 (en) * | 2019-04-17 | 2024-07-23 | State Farm Mutual Automobile Insurance Company | Method and system for early identification and settlement of total loss claims |
US20230135121A1 (en) * | 2019-04-17 | 2023-05-04 | State Farm Mutual Automobile Insurance Company | Method and system for early identification and settlement of total loss claims |
US11574366B1 (en) * | 2019-04-17 | 2023-02-07 | State Farm Mutual Automobile Insurance Company | Method and system for early identification and settlement of total loss claims |
US20200342304A1 (en) * | 2019-04-25 | 2020-10-29 | International Business Machines Corporation | Feature importance identification in deep learning models |
US11538286B2 (en) * | 2019-05-06 | 2022-12-27 | Beijing Baidu Netcom Science And Technology Co., Ltd. | Method and apparatus for vehicle damage assessment, electronic device, and computer storage medium |
US11669809B1 (en) | 2019-05-09 | 2023-06-06 | Ccc Intelligent Solutions Inc. | Intelligent vehicle repair estimation system |
US10949814B1 (en) * | 2019-05-09 | 2021-03-16 | Ccc Information Services Inc. | Intelligent vehicle repair estimation system |
US10783643B1 (en) * | 2019-05-27 | 2020-09-22 | Alibaba Group Holding Limited | Segmentation-based damage detection |
US11004204B2 (en) | 2019-05-27 | 2021-05-11 | Advanced New Technologies Co., Ltd. | Segmentation-based damage detection |
WO2020254700A1 (en) * | 2019-06-17 | 2020-12-24 | Quibim, S.L. | Method and system for identifying anomalies in x-rays |
US12159445B2 (en) | 2019-06-17 | 2024-12-03 | Quibim, S.L. | Method and system for identifying anomalies in X-rays |
ES2799828A1 (en) * | 2019-06-17 | 2020-12-21 | Quibim S L | METHOD AND SYSTEM TO IDENTIFY ANOMALIES IN X-RAYS
US11587315B2 (en) | 2019-06-19 | 2023-02-21 | Deere & Company | Apparatus and methods for augmented reality measuring of equipment |
US11580628B2 (en) | 2019-06-19 | 2023-02-14 | Deere & Company | Apparatus and methods for augmented reality vehicle condition inspection |
EP3754603A1 (en) * | 2019-06-19 | 2020-12-23 | Deere & Company | Apparatus and methods for augmented reality vehicle condition inspection |
US11544813B2 (en) * | 2019-06-27 | 2023-01-03 | Samsung Electronics Co., Ltd. | Artificial neural network model and electronic device including the same |
CN112149793A (en) * | 2019-06-27 | 2020-12-29 | 三星电子株式会社 | Artificial neural network model and electronic device including the same |
US20220261830A1 (en) * | 2019-06-28 | 2022-08-18 | Fair Ip, Llc | Machine learning engine for demand-based pricing |
US12236671B2 (en) * | 2019-07-03 | 2025-02-25 | Ocado Innovation Limited | Damage detection apparatus and method |
US20220358753A1 (en) * | 2019-07-03 | 2022-11-10 | Ocado Innovation Limited | Damage Detection Apparatus and Method |
CN114041150A (en) * | 2019-07-03 | 2022-02-11 | 奥卡多创新有限公司 | Damage detection apparatus and method |
WO2021001337A1 (en) * | 2019-07-03 | 2021-01-07 | Ocado Innovation Limited | A damage detection apparatus and method |
CN110472656A (en) * | 2019-07-03 | 2019-11-19 | 平安科技(深圳)有限公司 | Vehicle image classification method, device, computer equipment and storage medium |
US12118752B2 (en) | 2019-07-22 | 2024-10-15 | Adobe Inc. | Determining colors of objects in digital images |
US12093306B2 (en) | 2019-07-22 | 2024-09-17 | Adobe Inc. | Automatically detecting user-requested objects in digital images |
US12020414B2 (en) | 2019-07-22 | 2024-06-25 | Adobe Inc. | Utilizing deep neural networks to automatically select instances of detected objects in images |
US11631234B2 (en) | 2019-07-22 | 2023-04-18 | Adobe, Inc. | Automatically detecting user-requested objects in images |
US11797847B2 (en) | 2019-07-22 | 2023-10-24 | Adobe Inc. | Selecting instances of detected objects in images utilizing object detection models |
US12244784B2 (en) | 2019-07-29 | 2025-03-04 | Fyusion, Inc. | Multiview interactive digital media representation inventory verification |
WO2021022094A1 (en) * | 2019-07-30 | 2021-02-04 | Dolby Laboratories Licensing Corporation | Per-epoch data augmentation for training acoustic models |
US11373313B2 (en) * | 2019-08-08 | 2022-06-28 | Canon Kabushiki Kaisha | Image processing apparatus and image processing method |
US11410287B2 (en) * | 2019-09-09 | 2022-08-09 | Genpact Luxembourg S.à r.l. II | System and method for artificial intelligence based determination of damage to physical structures |
EP4028977A4 (en) * | 2019-09-09 | 2023-09-27 | Neural Claim System, Inc. | Methods and systems for submitting and/or processing insurance claims for damaged motor vehicle glass |
US20220205224A1 (en) * | 2019-09-19 | 2022-06-30 | Sumitomo Heavy Industries, Ltd. | Excavator and management apparatus for excavator |
US12345022B2 (en) * | 2019-09-19 | 2025-07-01 | Sumitomo Heavy Industries, Ltd. | Excavator and management apparatus for excavator |
US20230410282A1 (en) * | 2019-09-22 | 2023-12-21 | Openlane, Inc. | Vehicle self-inspection apparatus and method |
WO2021055988A1 (en) | 2019-09-22 | 2021-03-25 | Kar Auction Services, Inc. | Vehicle self-inspection apparatus and method |
US11721010B2 (en) * | 2019-09-22 | 2023-08-08 | Openlane, Inc. | Vehicle self-inspection apparatus and method |
US20210090240A1 (en) * | 2019-09-22 | 2021-03-25 | Kar Auction Services, Inc. | Vehicle self-inspection apparatus and method |
EP4032041A4 (en) * | 2019-09-22 | 2023-09-13 | Kar Auction Services, Inc. | DEVICE AND METHOD FOR SELF-TESTING VEHICLES |
US11823137B2 (en) | 2019-09-30 | 2023-11-21 | Mitchell International, Inc. | Automated vehicle repair estimation by voting ensembling of multiple artificial intelligence functions |
US11188853B2 (en) * | 2019-09-30 | 2021-11-30 | The Travelers Indemnity Company | Systems and methods for artificial intelligence (AI) damage triage and dynamic resource allocation, routing, and scheduling |
US11887063B2 (en) | 2019-09-30 | 2024-01-30 | Mitchell International, Inc. | Automated vehicle repair estimation by random ensembling of multiple artificial intelligence functions |
US12229732B2 (en) | 2019-09-30 | 2025-02-18 | Mitchell International, Inc. | Automated vehicle repair estimation by voting ensembling of multiple artificial intelligence functions |
US11797952B2 (en) | 2019-09-30 | 2023-10-24 | Mitchell International, Inc. | Automated vehicle repair estimation by adaptive ensembling of multiple artificial intelligence functions |
US11836684B2 (en) * | 2019-09-30 | 2023-12-05 | Mitchell International, Inc. | Automated vehicle repair estimation by preferential ensembling of multiple artificial intelligence functions |
US20210097505A1 (en) * | 2019-09-30 | 2021-04-01 | Mitchell International, Inc. | Automated vehicle repair estimation by preferential ensembling of multiple artificial intelligence functions |
CN110837789A (en) * | 2019-10-31 | 2020-02-25 | 北京奇艺世纪科技有限公司 | Method and device for detecting object, electronic equipment and medium |
US11328542B2 (en) * | 2019-11-01 | 2022-05-10 | Fu Tai Hua Industry (Shenzhen) Co., Ltd. | Method for reporting faults in shareable vehicles and parking device employing the method |
US20230394092A1 (en) * | 2019-11-15 | 2023-12-07 | Capital One Services, Llc | Vehicle inventory search recommendation using image analysis driven by machine learning |
US20210334319A1 (en) * | 2019-11-15 | 2021-10-28 | Capital One Services, Llc | Vehicle inventory search recommendation using image analysis driven by machine learning |
US11068549B2 (en) * | 2019-11-15 | 2021-07-20 | Capital One Services, Llc | Vehicle inventory search recommendation using image analysis driven by machine learning |
US12204595B2 (en) * | 2019-11-15 | 2025-01-21 | Capital One Services, Llc | Vehicle inventory search recommendation using image analysis driven by machine learning |
US11775597B2 (en) * | 2019-11-15 | 2023-10-03 | Capital One Services, Llc | Vehicle inventory search recommendation using image analysis driven by machine learning |
US12374135B2 (en) * | 2019-12-02 | 2025-07-29 | Click-Ins, Ltd. | Systems, methods and programs for generating damage print in a vehicle |
US20230012230A1 (en) * | 2019-12-02 | 2023-01-12 | Click-Ins, Ltd. | Systems, methods and programs for generating damage print in a vehicle |
JP7371466B2 (en) | 2019-12-03 | 2023-10-31 | 京セラドキュメントソリューションズ株式会社 | Image processing device |
JP2021089512A (en) * | 2019-12-03 | 2021-06-10 | 京セラドキュメントソリューションズ株式会社 | Image processing device |
EP3839822A1 (en) * | 2019-12-16 | 2021-06-23 | Accenture Global Solutions Limited | Explainable artificial intelligence (AI) based image analytic, automatic damage detection and estimation system
US11676365B2 (en) | 2019-12-16 | 2023-06-13 | Accenture Global Solutions Limited | Explainable artificial intelligence (AI) based image analytic, automatic damage detection and estimation system |
US10846716B1 (en) * | 2019-12-27 | 2020-11-24 | Capital One Services, Llc | System and method for facilitating training of a prediction model to estimate a user vehicle damage tolerance |
US20240193943A1 (en) * | 2019-12-30 | 2024-06-13 | Scope Technologies Holdings Limited | Systems and Methods for Automatedly Identifying, Documenting and Reporting Vehicle Damage |
US11915479B1 (en) | 2019-12-30 | 2024-02-27 | Scope Technologies Holdings Limited | Systems and methods for automatedly identifying, documenting and reporting vehicle damage |
US11257203B2 (en) | 2020-01-03 | 2022-02-22 | Tractable Ltd | Inconsistent damage determination |
WO2021136942A1 (en) * | 2020-01-03 | 2021-07-08 | Tractable Ltd | Method of determining damage to parts of a vehicle |
WO2021136943A1 (en) * | 2020-01-03 | 2021-07-08 | Tractable Ltd | Method of determining painting requirements for a damaged vehicle
US20210272271A1 (en) * | 2020-01-03 | 2021-09-02 | Tractable Ltd | Method of determining damage to parts of a vehicle |
US11636581B2 (en) | 2020-01-03 | 2023-04-25 | Tractable Limited | Undamaged/damaged determination |
US11257204B2 (en) | 2020-01-03 | 2022-02-22 | Tractable Ltd | Detailed damage determination with image segmentation |
US12136068B2 (en) * | 2020-01-03 | 2024-11-05 | Tractable Ltd | Paint refinish determination |
WO2021136947A1 (en) * | 2020-01-03 | 2021-07-08 | Tractable Ltd | Vehicle damage state determination method |
US11587221B2 (en) * | 2020-01-03 | 2023-02-21 | Tractable Limited | Detailed damage determination with image cropping |
US11244438B2 (en) | 2020-01-03 | 2022-02-08 | Tractable Ltd | Auxiliary parts damage determination |
US11250554B2 (en) | 2020-01-03 | 2022-02-15 | Tractable Ltd | Repair/replace and labour hours determination |
US11386543B2 (en) | 2020-01-03 | 2022-07-12 | Tractable Ltd | Universal car damage determination with make/model invariance |
US11361426B2 (en) | 2020-01-03 | 2022-06-14 | Tractable Ltd | Paint blending determination |
WO2021136938A1 (en) * | 2020-01-03 | 2021-07-08 | Tractable Ltd | Method of determining repair operations for a damaged vehicle including using domain confusion loss techniques |
US12333503B2 (en) | 2020-01-13 | 2025-06-17 | Capital One Services, Llc | Visualization of damage on images |
US11676113B2 (en) | 2020-01-13 | 2023-06-13 | Capital One Services, Llc | Visualization of damage on images |
US10970835B1 (en) * | 2020-01-13 | 2021-04-06 | Capital One Services, Llc | Visualization of damage on images |
US11620769B2 (en) * | 2020-01-14 | 2023-04-04 | Capital One Services, Llc | Vehicle information photo overlay |
US20210217208A1 (en) * | 2020-01-14 | 2021-07-15 | Capital One Services, Llc | Vehicle information photo overlay |
US11587224B2 (en) | 2020-01-14 | 2023-02-21 | Capital One Services, Llc | Vehicle listing image detection and alert system |
US11972556B2 (en) | 2020-01-16 | 2024-04-30 | Fyusion, Inc. | Mobile multi-camera multi-view capture |
US11776142B2 (en) | 2020-01-16 | 2023-10-03 | Fyusion, Inc. | Structuring visual data |
US12333710B2 (en) | 2020-01-16 | 2025-06-17 | Fyusion, Inc. | Mobile multi-camera multi-view capture |
US12073574B2 (en) | 2020-01-16 | 2024-08-27 | Fyusion, Inc. | Structuring visual data |
US11562474B2 (en) | 2020-01-16 | 2023-01-24 | Fyusion, Inc. | Mobile multi-camera multi-view capture |
US11138562B2 (en) * | 2020-01-17 | 2021-10-05 | Dell Products L.P. | Automatic processing of device damage claims using artificial intelligence |
US11710158B2 (en) | 2020-01-23 | 2023-07-25 | Ford Global Technologies, Llc | Vehicle damage estimation |
US20240013266A1 (en) * | 2020-01-31 | 2024-01-11 | Cognivision Inc. | Estimation apparatus, estimation system, and estimation method |
US11631165B2 (en) * | 2020-01-31 | 2023-04-18 | Sachcontrol Gmbh | Repair estimation based on images |
US12373894B2 (en) * | 2020-02-05 | 2025-07-29 | Fulpruf Technology Corporation | Vehicle supply chain tracking system |
US11461890B2 (en) * | 2020-02-05 | 2022-10-04 | Fulpruf Technology Corporation | Vehicle supply chain damage tracking system |
US20230111980A1 (en) * | 2020-02-05 | 2023-04-13 | Fulpruf Technology Corporation | Vehicle Supply Chain Damage Tracking System |
US11720969B2 (en) * | 2020-02-07 | 2023-08-08 | International Business Machines Corporation | Detecting vehicle identity and damage status using single video analysis |
US20210248681A1 (en) * | 2020-02-07 | 2021-08-12 | International Business Machines Corporation | Detecting vehicle identity and damage status using single video analysis |
CN111489433A (en) * | 2020-02-13 | 2020-08-04 | 北京百度网讯科技有限公司 | Vehicle damage positioning method and device, electronic equipment and readable storage medium |
US11886494B2 (en) | 2020-02-25 | 2024-01-30 | Adobe Inc. | Utilizing natural language processing automatically select objects in images |
WO2021175006A1 (en) * | 2020-03-04 | 2021-09-10 | 深圳壹账通智能科技有限公司 | Vehicle image detection method and apparatus, and computer device and storage medium |
US20210287530A1 (en) * | 2020-03-11 | 2021-09-16 | Allstate Insurance Company | Applying machine learning to telematics data to predict accident outcomes |
US11681919B2 (en) | 2020-03-12 | 2023-06-20 | Adobe Inc. | Automatically selecting query objects in digital images |
DE102020106752A1 (en) | 2020-03-12 | 2021-09-16 | Bayerische Motoren Werke Aktiengesellschaft | Device and method for training an image processing system |
US11798095B1 (en) | 2020-03-30 | 2023-10-24 | Allstate Insurance Company | Commercial claim processing platform using machine learning to generate shared economy insights |
CN113496526A (en) * | 2020-04-03 | 2021-10-12 | 发那科株式会社 | 3D pose detection by multiple 2D cameras
US11350078B2 (en) * | 2020-04-03 | 2022-05-31 | Fanuc Corporation | 3D pose detection by multiple 2D cameras |
US20210314551A1 (en) * | 2020-04-03 | 2021-10-07 | Fanuc Corporation | 3d pose detection by multiple 2d cameras |
US11935219B1 (en) * | 2020-04-10 | 2024-03-19 | Allstate Insurance Company | Systems and methods for automated property damage estimations and detection based on image analysis and neural network training |
CN111612741A (en) * | 2020-04-22 | 2020-09-01 | 杭州电子科技大学 | An accurate no-reference image quality assessment method based on distortion identification |
US11430042B2 (en) | 2020-04-30 | 2022-08-30 | Capital One Services, Llc | Methods and systems for providing a vehicle recommendation |
US12367777B2 (en) | 2020-05-08 | 2025-07-22 | The Travelers Indemnity Company | Systems and methods for autonomous hazardous area data collection |
US20210350713A1 (en) * | 2020-05-08 | 2021-11-11 | The Travelers Indemnity Company | Systems and methods for autonomous hazardous area data collection |
US11710411B2 (en) * | 2020-05-08 | 2023-07-25 | The Travelers Indemnity Company | Systems and methods for autonomous hazardous area data collection |
US11727551B2 (en) * | 2020-05-14 | 2023-08-15 | Ccc Information Services Inc. | Image processing system using recurrent neural networks |
US20210358105A1 (en) * | 2020-05-14 | 2021-11-18 | Ccc Information Services Inc. | Image processing system using recurrent neural networks |
US11587180B2 (en) * | 2020-05-14 | 2023-02-21 | Ccc Information Services Inc. | Image processing system |
US11514530B2 (en) * | 2020-05-14 | 2022-11-29 | Ccc Information Services Inc. | Image processing system using convolutional neural networks |
CN111612066A (en) * | 2020-05-21 | 2020-09-01 | 成都理工大学 | Remote sensing image classification method based on deep fusion convolutional neural network |
US20210374871A1 (en) * | 2020-05-28 | 2021-12-02 | Jonathan PYLE | Vehicle Repair Estimation System and Method |
US20230230172A1 (en) * | 2020-05-28 | 2023-07-20 | Jonathan PYLE | Vehicle Repair Estimation System and Method |
US12236488B2 (en) * | 2020-05-28 | 2025-02-25 | Jonathan PYLE | Vehicle repair estimation system and method |
US11704740B2 (en) * | 2020-05-28 | 2023-07-18 | Jonathan PYLE | Vehicle repair estimation system and method |
US20230306525A1 (en) * | 2020-05-28 | 2023-09-28 | Jonathan PYLE | Vehicle Repair Estimation System and Method |
JP2021189899A (en) * | 2020-06-02 | 2021-12-13 | 株式会社リクルート | Information processing apparatus, information processing method, and information processing program |
US20210406693A1 (en) * | 2020-06-25 | 2021-12-30 | Nxp B.V. | Data sample analysis in a dataset for a machine learning model |
CN111899169A (en) * | 2020-07-02 | 2020-11-06 | 佛山市南海区广工大数控装备协同创新研究院 | A face image segmentation network method based on semantic segmentation |
US20220028188A1 (en) * | 2020-07-21 | 2022-01-27 | CarDr.com | Mobile vehicle inspection system |
US12073557B2 (en) * | 2020-07-22 | 2024-08-27 | Crash Point Systems, Llc | Capturing vehicle data and assessing vehicle damage |
US20220028045A1 (en) * | 2020-07-22 | 2022-01-27 | Crash Point Systems, Llc | Capturing vehicle data and assessing vehicle damage |
US20230368369A1 (en) * | 2020-07-22 | 2023-11-16 | Crash Point Systems, Llc | Capturing vehicle data and assessing vehicle damage |
US11756184B2 (en) * | 2020-07-22 | 2023-09-12 | Crash Point Systems, Llc | Capturing vehicle data and assessing vehicle damage |
US11436755B2 (en) * | 2020-08-09 | 2022-09-06 | Google Llc | Real-time pose estimation for unseen objects |
US11138410B1 (en) * | 2020-08-25 | 2021-10-05 | Covar Applied Technologies, Inc. | 3-D object detection and classification from imagery |
US11727575B2 (en) * | 2020-08-25 | 2023-08-15 | Covar Llc | 3-D object detection and classification from imagery |
US20220067342A1 (en) * | 2020-08-25 | 2022-03-03 | Covar Applied Technologies, Inc. | 3-d object detection and classification from imagery |
US11488117B2 (en) * | 2020-08-27 | 2022-11-01 | Mitchell International, Inc. | Systems and methods for managing associations between damaged parts and non-reusable parts in a collision repair estimate |
CN112017065A (en) * | 2020-08-27 | 2020-12-01 | 中国平安财产保险股份有限公司 | Vehicle loss assessment and claim settlement method and device and computer readable storage medium |
US12229383B2 (en) | 2020-09-09 | 2025-02-18 | State Farm Mutual Automobile Insurance Company | Vehicular incident reenactment using three-dimensional (3D) representations |
US20220084234A1 (en) * | 2020-09-17 | 2022-03-17 | GIST(Gwangju Institute of Science and Technology) | Method and electronic device for identifying size of measurement target object |
US12380586B2 (en) * | 2020-09-17 | 2025-08-05 | GIST(Gwangju Institute of Science and Technology) | Method and electronic device for identifying size of measurement target object |
US11328402B2 (en) | 2020-09-29 | 2022-05-10 | Hong Kong Applied Science and Technology Research Institute Company Limited | Method and system of image based anomaly localization for vehicles through generative contextualized adversarial network |
US20240320751A1 (en) * | 2020-10-30 | 2024-09-26 | Tractable Ltd. | Remote vehicle damage assessment |
US20220138860A1 (en) * | 2020-10-30 | 2022-05-05 | Tractable Ltd | Remote Vehicle Damage Assessment |
WO2022094621A1 (en) * | 2020-10-30 | 2022-05-05 | Tractable, Inc. | Remote vehicle damage assessment |
US11989787B2 (en) * | 2020-10-30 | 2024-05-21 | Tractable Ltd | Remote vehicle damage assessment |
CN112348799A (en) * | 2020-11-11 | 2021-02-09 | 德联易控科技(北京)有限公司 | Vehicle damage assessment method and device, terminal equipment and storage medium |
US20220156530A1 (en) * | 2020-11-13 | 2022-05-19 | Salesforce.Com, Inc. | Systems and methods for interpolative centroid contrastive learning |
WO2022100454A1 (en) * | 2020-11-13 | 2022-05-19 | 深圳壹账通智能科技有限公司 | Vehicle damage assessment method, apparatus, terminal device and storage medium |
US20220156497A1 (en) * | 2020-11-17 | 2022-05-19 | Fyusion, Inc. | Multi-view visual data damage detection |
US11861900B2 (en) * | 2020-11-17 | 2024-01-02 | Fyusion, Inc. | Multi-view visual data damage detection |
US12211272B2 (en) * | 2020-11-17 | 2025-01-28 | Fyusion, Inc. | Multi-view visual data damage detection |
US20220155945A1 (en) * | 2020-11-17 | 2022-05-19 | Fyusion, Inc. | Damage detection portal |
US20240096094A1 (en) * | 2020-11-17 | 2024-03-21 | Fyusion, Inc. | Multi-view visual data damage detection |
US11488371B2 (en) | 2020-12-17 | 2022-11-01 | Concat Systems, Inc. | Machine learning artificial intelligence system for producing 360 virtual representation of an object |
US11900611B2 (en) * | 2021-01-15 | 2024-02-13 | Adobe Inc. | Generating object masks of object parts utilizing deep learning |
US11587234B2 (en) * | 2021-01-15 | 2023-02-21 | Adobe Inc. | Generating class-agnostic object masks in digital images |
US20230136913A1 (en) * | 2021-01-15 | 2023-05-04 | Adobe Inc. | Generating object masks of object parts utilizing deep learning |
US20220230321A1 (en) * | 2021-01-15 | 2022-07-21 | Adobe Inc. | Generating class-agnostic object masks in digital images |
US11972569B2 (en) | 2021-01-26 | 2024-04-30 | Adobe Inc. | Segmenting objects in digital images utilizing a multi-object segmentation model framework |
US11900633B2 (en) * | 2021-01-27 | 2024-02-13 | The Boeing Company | Vehicle pose determination systems and methods |
US20220237822A1 (en) * | 2021-01-27 | 2022-07-28 | The Boeing Company | Vehicle pose determination systems and methods |
US12235928B2 (en) | 2021-02-02 | 2025-02-25 | Inait Sa | Machine annotation of photographic images |
US11971953B2 (en) | 2021-02-02 | 2024-04-30 | Inait Sa | Machine annotation of photographic images |
WO2022169622A1 (en) * | 2021-02-04 | 2022-08-11 | Carnegie Mellon University | Soft anchor point object detection |
US12205266B2 (en) | 2021-02-05 | 2025-01-21 | Fyusion, Inc. | Multi-view interactive digital media representation viewer |
FR3119916A1 (en) * | 2021-02-17 | 2022-08-19 | Continental Automotive Gmbh | Extrinsic calibration of a camera mounted on a vehicle chassis |
US20220262083A1 (en) * | 2021-02-18 | 2022-08-18 | Inait Sa | Annotation of 3d models with signs of use visible in 2d images |
WO2022175044A1 (en) * | 2021-02-18 | 2022-08-25 | Inait Sa | Annotation of 3d models with signs of use visible in 2d images |
US11544914B2 (en) * | 2021-02-18 | 2023-01-03 | Inait Sa | Annotation of 3D models with signs of use visible in 2D images |
US11983836B2 (en) | 2021-02-18 | 2024-05-14 | Inait Sa | Annotation of 3D models with signs of use visible in 2D images |
US11605151B2 (en) | 2021-03-02 | 2023-03-14 | Fyusion, Inc. | Vehicle undercarriage imaging |
US12182964B2 (en) | 2021-03-02 | 2024-12-31 | Fyusion, Inc. | Vehicle undercarriage imaging |
US11893707B2 (en) | 2021-03-02 | 2024-02-06 | Fyusion, Inc. | Vehicle undercarriage imaging |
US11341699B1 (en) * | 2021-03-09 | 2022-05-24 | Carmax Enterprise Services, Llc | Systems and methods for synthetic image generation |
US12039647B1 (en) * | 2021-03-09 | 2024-07-16 | Carmax Enterprise Services, Llc | Systems and methods for synthetic image generation |
EP4305581A4 (en) * | 2021-03-10 | 2025-01-22 | Neural Claim System, Inc. | Methods and systems for issuing and/or processing insurance claims for damaged motor vehicle glass |
WO2022192472A1 (en) * | 2021-03-10 | 2022-09-15 | Neural Claim System, Inc. | Methods and systems for submitting and/or processing insurance claims for damaged motor vehicle glass |
GB2618738A (en) * | 2021-03-10 | 2023-11-15 | Neural Claim System Inc | Methods and systems for submitting and/or processing insurance claims for damaged motor vehicle glass |
US12159323B2 (en) * | 2021-03-24 | 2024-12-03 | Panasonic Automotive Systems Co., Ltd. | Vehicle monitoring apparatus, vehicle monitoring system, and vehicle monitoring method |
US20220309602A1 (en) * | 2021-03-24 | 2022-09-29 | Panasonic Intellectual Property Management Co., Ltd. | Vehicle monitoring apparatus, vehicle monitoring system, and vehicle monitoring method |
US20220351066A1 (en) * | 2021-04-29 | 2022-11-03 | Dell Products L.P. | System and Method for Identification of Replacement Parts Using Artificial Intelligence/Machine Learning |
US12393866B2 (en) * | 2021-04-29 | 2025-08-19 | Dell Products L.P. | System and method for identification of replacement parts using artificial intelligence/machine learning |
US20220351298A1 (en) * | 2021-04-30 | 2022-11-03 | Alan Martin | System Facilitating Government Mitigation of Damage at Public and Private Facilities |
US12315276B2 (en) * | 2021-05-10 | 2025-05-27 | Ccc Intelligent Solutions Inc. | Image processing systems for detecting types of damage to a vehicle |
US20220358757A1 (en) * | 2021-05-10 | 2022-11-10 | Ccc Intelligent Solutions Inc. | Image processing systems for detecting types of damage to a vehicle |
WO2023000737A1 (en) * | 2021-07-23 | 2023-01-26 | 明觉科技(北京)有限公司 | Vehicle accident loss assessment method and apparatus |
EP4372658A1 (en) * | 2021-07-23 | 2024-05-22 | Data Enlighten Technology (Beijing) Co., Ltd | Vehicle accident loss assessment method and apparatus |
US11983745B2 (en) | 2021-08-06 | 2024-05-14 | Capital One Services, Llc | Systems and methods for valuation of a vehicle |
US20230067659A1 (en) * | 2021-08-24 | 2023-03-02 | Ford Global Technologies, Llc | Systems and methods for detecting vehicle defects |
US20240095896A1 (en) * | 2021-08-25 | 2024-03-21 | Genpact Luxembourg S.à r.l. II | Dimension estimation using duplicate instance identification in a multiview and multiscale system |
US11875496B2 (en) * | 2021-08-25 | 2024-01-16 | Genpact Luxembourg S.à r.l. II | Dimension estimation using duplicate instance identification in a multiview and multiscale system |
US20230063002A1 (en) * | 2021-08-25 | 2023-03-02 | Genpact Luxembourg S.à r.l. II | Dimension estimation using duplicate instance identification in a multiview and multiscale system |
US12190491B2 (en) * | 2021-08-25 | 2025-01-07 | Genpact Usa, Inc. | Dimension estimation using duplicate instance identification in a multiview and multiscale system |
WO2023055794A1 (en) * | 2021-09-28 | 2023-04-06 | Watts Robert Lee | Systems and methods for repair of vehicle body damage |
US12118779B1 (en) * | 2021-09-30 | 2024-10-15 | United Services Automobile Association (Usaa) | System and method for assessing structural damage in occluded aerial images |
US12002192B2 (en) | 2021-11-16 | 2024-06-04 | Solera Holdings, Llc | Transfer of damage markers from images to 3D vehicle models for damage assessment |
WO2023091859A1 (en) * | 2021-11-16 | 2023-05-25 | Solera Holdings, Llc | Transfer of damage markers from images to 3d vehicle models for damage assessment |
CN114268621A (en) * | 2021-12-21 | 2022-04-01 | 东方数科(北京)信息技术有限公司 | Deep learning-based digital instrument meter reading method and device |
US12033218B2 (en) | 2022-01-31 | 2024-07-09 | Vehicle Service Group, Llc | Assessing damages on vehicles |
EP4224417A1 (en) * | 2022-01-31 | 2023-08-09 | Vehicle Service Group, LLC | Assessing damages on vehicles |
CN114241398A (en) * | 2022-02-23 | 2022-03-25 | 深圳壹账通科技服务有限公司 | Vehicle damage assessment method, device, equipment and storage medium based on artificial intelligence |
US12229462B2 (en) | 2022-02-28 | 2025-02-18 | Freddy Technologies Llc | System and method for automatically curating and displaying images |
US11861665B2 (en) | 2022-02-28 | 2024-01-02 | Concat Systems, Inc. | Artificial intelligence machine learning system for classifying images and producing a predetermined visual output |
WO2023205220A1 (en) * | 2022-04-19 | 2023-10-26 | Tractable Ltd | Remote vehicle inspection |
CN114817991A (en) * | 2022-05-10 | 2022-07-29 | 上海计算机软件技术开发中心 | A method and system for image desensitization of Internet of Vehicles |
US20230377047A1 (en) * | 2022-05-18 | 2023-11-23 | The Toronto-Dominion Bank | Systems and methods for automated data processing using machine learning for vehicle loss detection |
US12223549B2 (en) * | 2022-05-18 | 2025-02-11 | The Toronto-Dominion Bank | Systems and methods for automated data processing using machine learning for vehicle loss detection |
CN114842198A (en) * | 2022-05-31 | 2022-08-02 | 平安科技(深圳)有限公司 | Intelligent loss assessment method, device and equipment for vehicle and storage medium |
US20230410271A1 (en) * | 2022-06-16 | 2023-12-21 | Toyota Motor Engineering & Manufacturing North America, Inc. | Vehicle assessment systems and methods |
US12272122B1 (en) * | 2022-08-11 | 2025-04-08 | Amazon Technologies, Inc. | Techniques for optimizing object detection frameworks |
US20240087330A1 (en) * | 2022-09-12 | 2024-03-14 | Mitchell International, Inc. | System and method for automatically identifying vehicle panels requiring paint blending |
US20240104709A1 (en) * | 2022-09-27 | 2024-03-28 | Hyundai Mobis Co., Ltd. | Method and system for providing vehicle exterior damage determination service |
US12400273B2 (en) | 2023-03-15 | 2025-08-26 | Neural Claim System, Inc. | Methods and systems for submitting and/or processing insurance claims for damaged motor vehicle glass |
CN115965448A (en) * | 2023-03-16 | 2023-04-14 | 邦邦汽车销售服务(北京)有限公司 | Vehicle maintenance accessory recommendation method and system based on image processing |
US20240378809A1 (en) * | 2023-05-12 | 2024-11-14 | Adobe Inc. | Digital image decaling |
EP4492338A1 (en) * | 2023-07-10 | 2025-01-15 | Cambridge Mobile Telematics Inc. | Automating vehicle damage inspection using claims photos |
EP4539485A1 (en) * | 2023-10-12 | 2025-04-16 | ACV Auctions Inc. | Systems and methods for guiding the capture of vehicle images and videos |
US12361538B1 (en) * | 2024-04-07 | 2025-07-15 | Uveye Ltd. | Detection and estimation of defects in vehicle's exterior |
RU2838349C1 (en) * | 2024-10-30 | 2025-04-14 | Общество с ограниченной ответственностью "Тетрон" | Method for automated pre-trip technical inspection of vehicles |
Also Published As
Publication number | Publication date |
---|---|
US11144889B2 (en) | 2021-10-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11144889B2 (en) | Automatic assessment of damage and repair costs in vehicles | |
US11443288B2 (en) | Automatic assessment of damage and repair costs in vehicles | |
EP3844669B1 (en) | Method and system for facilitating recognition of vehicle parts based on a neural network | |
US20230316702A1 (en) | Explainable artificial intelligence (ai) based image analytic, automatic damage detection and estimation system | |
CN108171112B (en) | Vehicle Recognition and Tracking Method Based on Convolutional Neural Network | |
Hoang et al. | Enhanced detection and recognition of road markings based on adaptive region of interest and deep learning | |
Sharma et al. | Vehicle identification using modified region based convolution network for intelligent transportation system | |
CA3136674C (en) | Methods and systems for crack detection using a fully convolutional network | |
CN117095180B (en) | Embryo development stage prediction and quality assessment method based on stage identification | |
Xing et al. | Traffic sign recognition using guided image filtering | |
US20230095533A1 (en) | Enriched and discriminative convolutional neural network features for pedestrian re-identification and trajectory modeling | |
Do et al. | Automatic license plate recognition using mobile device | |
Qaddour et al. | Automatic damaged vehicle estimator using enhanced deep learning algorithm | |
Xing et al. | The improved framework for traffic sign recognition using guided image filtering | |
US12327397B2 (en) | Electronic device and method with machine learning training | |
CN119540941A (en) | A three-dimensional target detection method based on stereo vision technology | |
US20250078514A1 (en) | Ai based inventory control system | |
Kaimkhani et al. | UAV with vision to recognise vehicle number plates | |
Zhang et al. | Visual extraction system for insulators on power transmission lines from UAV photographs using support vector machine and color models | |
Diniz et al. | Enhancement of accuracy level in parking space identification by using machine learning algorithms | |
Ström et al. | Extracting regions of interest and detecting outliers from image data | |
Gunasundari et al. | High Dimensionality Reduction and Transfer Learning Based Ensemble Approach for Effective Car Damage Detection | |
Ong et al. | Vehicle Classification Using Neural Networks and Image Processing | |
Ramit et al. | Performance Evaluation of YOLO Models for Detecting Bangladeshi License Plates | |
van Ruitenbeek et al. | Vehicle Damage Detection using Deep Convolutional Neural Networks |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
AS | Assignment |
Owner name: AMERICAN INTERNATIONAL GROUP, INC., NEW YORK
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LI, KAIGANG;AITHAL, ASHWATH;DALAL, SIDDHARTHA;AND OTHERS;SIGNING DATES FROM 20180503 TO 20180507;REEL/FRAME:045746/0570 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: ADVISORY ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY
Year of fee payment: 4 |