WO2020051545A1 - Method and computer-readable storage medium for generating training samples for training a target detector - Google Patents

Method and computer-readable storage medium for generating training samples for training a target detector Download PDF

Info

Publication number
WO2020051545A1
WO2020051545A1 (PCT/US2019/050082)
Authority
WO
WIPO (PCT)
Prior art keywords
target
region
image
bounding box
region proposal
Prior art date
Application number
PCT/US2019/050082
Other languages
French (fr)
Other versions
WO2020051545A9 (en)
Inventor
Juan Xu
Original Assignee
Alibaba Group Holding Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN201811046061.6A external-priority patent/CN110569699B/en
Application filed by Alibaba Group Holding Limited filed Critical Alibaba Group Holding Limited
Priority to EP19773612.7A priority Critical patent/EP3847579A1/en
Priority to SG11202012526SA priority patent/SG11202012526SA/en
Publication of WO2020051545A1 publication Critical patent/WO2020051545A1/en
Publication of WO2020051545A9 publication Critical patent/WO2020051545A9/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/217Validation; Performance evaluation; Active pattern learning techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/759Region-based matching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements

Definitions

  • This disclosure is generally related to the field of artificial intelligence. More specifically, this disclosure is related to a system and method for generating training samples for training a target detector.
  • a vehicle insurance company may send a professional claim adjuster to the location of a damaged vehicle to conduct a manual survey and damage assessment.
  • the survey and damage assessment conducted by the adjuster can include determining a repair solution, estimating an indemnity, taking photographs of the vehicle, and archiving the photographs for subsequent assessment of the damage by a damage inspector at the vehicle insurance company. Since the survey and subsequent damage assessment are performed manually, an insurance claim may require a significant number of days to resolve. Such delays in the processing time can lead to poor user experience with the vehicle insurance company.
  • the manual assessments may also incur a large cost (e.g., labor, training, licensing, etc.).
  • an AI-based assessment technique can be used for automatic identification of the damage of the vehicle (e.g., the parts of the vehicle).
  • a user can capture a set of images of the vehicle depicting the damages from the user’s location, such as the user’s home or work, and send the images to the insurance company (e.g., using an app or a web interface).
  • These images can be used by an AI model to identify the damage on the vehicle.
  • the automated assessment process may reduce the labor costs for a vehicle insurance company and improve user experience associated with claim processing.
  • Embodiments described herein provide a system for facilitating image sampling for training a target detector.
  • the system obtains a first image depicting a first target.
  • the continuous part of the first target in the first image is labeled and enclosed in a target bounding box.
  • the system then generates a set of positive image samples from an area of the first image enclosed by the target bounding box.
  • a respective positive image sample includes at least a part of the first target.
  • the system can train the target detector with the set of positive image samples to detect a second target from a second image.
  • the target detector can be an artificial intelligence (AI) model capable of detecting an object.
  • the first and second targets indicate first and second vehicular damages, respectively.
  • the label of the continuous part indicates a material impacted by the first vehicular damage.
  • the system detects the second target by detecting the second vehicular damage based on a corresponding material independent of identifying a part of a vehicle impacted by the second vehicular damage.
  • the system generates the set of positive image samples by determining a region proposal in the area of the first image enclosed by the target bounding box and selecting the region proposal as a positive sample if an overlapping parameter of the region proposal is in a threshold range.
  • the overlapping parameter is a ratio of an overlapping region and a surrounding region of the region proposal.
  • the overlapping region indicates a common region covered by both the region proposal and a set of internal bounding boxes within the target bounding box.
  • a respective internal bounding box can include at least a part of the continuous region.
  • the surrounding region indicates a total region covered by the region proposal and the set of internal bounding boxes.
  • the system selects the set of internal bounding boxes based on one of an intersection with the region proposal, a distance from the region proposal, and a total number of internal bounding boxes in the target bounding box.
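The overlapping parameter defined above can be made concrete for axis-aligned boxes. Below is a minimal Python sketch (illustrative only; none of these names come from the patent) that computes the parameter as the overlapping region divided by the surrounding region:

```python
from typing import List, Tuple

# A box is (x_min, y_min, x_max, y_max); all names here are illustrative.
Box = Tuple[float, float, float, float]

def intersection_area(a: Box, b: Box) -> float:
    """Area of the axis-aligned intersection of two boxes (0 if disjoint)."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(w, 0.0) * max(h, 0.0)

def box_area(a: Box) -> float:
    return max(a[2] - a[0], 0.0) * max(a[3] - a[1], 0.0)

def overlapping_parameter(proposal: Box, internal_boxes: List[Box]) -> float:
    """Overlapping region (proposal vs. the internal boxes) divided by the
    surrounding region (total area covered by proposal and internal boxes).

    Simplifying assumption: the internal boxes do not overlap one another,
    so unions can be computed by summing areas. Overlapping internal boxes
    would require an exact union (e.g., rasterization)."""
    overlap = sum(intersection_area(proposal, b) for b in internal_boxes)
    total = box_area(proposal) + sum(box_area(b) for b in internal_boxes) - overlap
    return overlap / total if total > 0 else 0.0
```

With a single internal box, this reduces to the intersection over union (IoU) mentioned later in the text.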
  • the system generates a negative sample, which excludes any part of the first target, from the first image. To do so, the system can select the region proposal as the negative sample in response to determining that the overlapping parameter of the region proposal is in a low threshold range. The system may also select an area outside of the target bounding box in the first image as the negative sample
  • the system determines a set of subsequent region proposals in the area of the first image enclosed by the target bounding box. To do so, the system can apply a movement rule to a previous region proposal and terminate based on a termination condition.
  • the system generates a second set of positive image samples. To do so, the system can select a positive image sample from a region proposal in a second target bounding box in the first image. The system may also change the size or shape of a bounding box of a region proposal of a previous round.
  • the system optimizes the training of the target detector by generating a plurality of bounding boxes for a plurality of image samples in the set of positive image samples and combining the plurality of bounding boxes to generate a combined bounding box and a corresponding label.
  • a respective bounding box identifies the corresponding part of the continuous region.
  • FIG. 1A illustrates exemplary infrastructure and environment facilitating an efficient assessment system, in accordance with an embodiment of the present application.
  • FIG. 1B illustrates exemplary training and operation of an efficient assessment system, in accordance with an embodiment of the present application.
  • FIG. 2 illustrates exemplary bounding boxes for generating image samples for training a target detection system of an efficient assessment system, in accordance with an embodiment of the present application.
  • FIG. 3A illustrates an exemplary region proposal generation process for generating image samples, in accordance with an embodiment of the present application.
  • FIG. 3B illustrates an exemplary assessment of a region proposal for generating image samples, in accordance with an embodiment of the present application.
  • FIG. 3C illustrates an exemplary determination of whether a region proposal can be an image sample, in accordance with an embodiment of the present application.
  • FIG. 4 illustrates an exemplary integration of detection results of multiple samples, in accordance with an embodiment of the present application.
  • FIG. 5A presents a flowchart illustrating a method of an assessment system performing a damage assessment, in accordance with an embodiment of the present application.
  • FIG. 5B presents a flowchart illustrating a method of an assessment system generating image samples for training a target detection system, in accordance with an embodiment of the present application.
  • FIG. 5C presents a flowchart illustrating a method of an assessment system integrating detection results of multiple samples, in accordance with an embodiment of the present application.
  • FIG. 6 illustrates an exemplary computer system that facilitates an efficient assessment system, in accordance with an embodiment of the present application.
  • FIG. 7 illustrates an exemplary apparatus that facilitates an efficient assessment system, in accordance with an embodiment of the present application.
  • the embodiments described herein solve the problem of efficiently detecting damage to a vehicle by (i) generating positive and negative image samples from a labeled image for training a target detection system; and (ii) integrating detection results of multiple image samples associated with a damage to increase the efficiency of the target detection system.
  • an assessment system can use the target detection system to identify damages on a vehicle independent of the damaged parts and generate a repair plan based on the identification.
  • an AI-based technique for determining vehicular damages from an image may include determining the damaged parts of a vehicle and the degree of the damages based on similar images in historical image data.
  • Another technique may involve identifying the area of a damaged part in the center of an input image through an identification method and comparing the area of the part with the historical image data to obtain a similar image. By comparing the obtained image with the input image, the technique may determine the degree of damages.
  • these techniques are prone to interference from the additional information of the damaged part in the input image, a reflection of light, contaminants, etc. As a result, these techniques may operate with low accuracy while determining the degree of damages.
  • the technique typically needs to be trained with a certain number of positive samples and negative samples.
  • a certain number of images depicting the damages need to serve as the positive samples, and a certain number of images not depicting the damages need to serve as the negative samples.
  • obtaining positive samples in sufficient numbers can be challenging.
  • a negative sample may include at least a segment of a damaged part and cause interference in the training process.
  • the AI-based model may not be equipped to detect damages on a part of a vehicle, especially if the model has not been trained with similar damages on the part of the vehicle.
  • an assessment system that can identify a damaged area of a vehicle (i.e., a target) from one or more images of the vehicle and assess the degree of damage on the identified damaged area.
  • the system can assess damage to a vehicle in two dimensions.
  • the system can identify a part of the vehicle based on object detection in one dimension and determine the damage in another dimension.
  • the system can identify the damaged area based on the material on which the damage has been inflicted.
  • the system can execute the damage detection independent of the underlying vehicle part. This allows the system to efficiently detect a damaged area on a vehicle without relying on how that damaged area may appear on a specific part of the vehicle.
  • the system can identify damages and the degree of damages on materials, such as the paint surface, plastic components, frosted components, glass, lights, mirrors, etc., without requiring information of the underlying parts.
  • the system can also be used for the identification of damages on similar materials in other scenarios (i.e., other than the damages on a vehicle).
  • the system can independently identify one or more parts that may represent the damaged area. In this way, the system can identify the damaged area and the degree of damages, and the parts that construct the damaged area.
  • the system then performs a damage assessment, determines a repair plan, and generates a cost estimate. For example, the system can estimate the cost and/or availability of the parts, determine whether a repair or replacement is needed based on the degree of damage, determine the deductibles and fees, and schedule a repair operation based on calendar information of a repair shop.
  • the target detector (e.g., a deep-learning network) can operate with high accuracy if it is trained with a sufficient number of positive and negative samples.
  • the system can also generate image samples from labeled images (e.g., images with labeled targets).
  • a labeled image may at least include a target bounding box that can be hand-labeled in advance and a plurality of internal bounding boxes in the target bounding box.
  • the target bounding box is used for surrounding a continuous region of a target (e.g., the largest continuous region of damage), and each of the plurality of internal bounding boxes surrounds a segment of the continuous region of the target.
  • the system can obtain the labeled images and determine region proposals for sampling in the target bounding box.
  • the region proposal can be represented based on a pre-determined bounding box (e.g., with predetermined size and shape). This bounding box can be placed in the target bounding box based on a sliding window or an image segmentation algorithm.
  • the system compares the region proposal with the corresponding internal bounding boxes to determine overlapping parameters.
  • the system may collect the region proposal as a positive sample for training the target detector. Otherwise, if the overlapping parameters are below a low threshold range, the system may collect the region proposal as a negative sample. In addition, the system can also collect negative samples from outside of the target bounding box to ensure that the negative sample does not include any damage information. In this way, the system can reduce interference and improve the accuracy of the target detector.
  • FIG. 1A illustrates exemplary infrastructure and environment facilitating an efficient assessment system, in accordance with an embodiment of the present application.
  • an infrastructure 100 can include an automated assessment environment 110.
  • Environment 110 can facilitate automated damage assessment in a distributed environment. Environment 110 can serve a client device 102 using an assessment server 130.
  • Server 130 can communicate with client device 102 via a network 120 (e.g., a local or a wide area network, such as the Internet).
  • Server 130 can include components such as a number of central processing unit (CPU) cores, a system memory (e.g., a dual in-line memory module), a network interface card (NIC), and a number of storage devices/disks.
  • Server 130 can run a database system (e.g., a database management system (DBMS)) for maintaining database instances.
  • user 104 may use client device 102 to capture an image 122 depicting damage 124. User 104 can then send an insurance claim 132 comprising image 122 as an input image from client device 102 via network 120.
  • the AI-based technique may determine the parts damaged by damage 124 and the degree of damage 124 based on the similar images in historical image data.
  • Another technique may involve identifying the area of damage 124 in the center of input image 122 through an identification method and comparing the area of damage 124 with the historical image data to obtain a similar image. By comparing the obtained image with input image 122, the technique may determine the degree of damages.
  • these techniques are prone to interference from the additional information in input image 122, such as undamaged segments, a reflection of light, contaminants, etc. As a result, these techniques may operate with low accuracy while determining the degree of damages. Furthermore, these techniques typically need to be trained with a certain number of positive samples and negative samples. However, obtaining positive samples in sufficient numbers can be challenging. Furthermore, a negative sample may include interfering elements. As a result, the AI-based technique may not be equipped to detect damage 124, especially if the technique has not been trained with damages similar to damage 124.
  • an automated assessment system 150 can efficiently and accurately identify the area and vehicle parts impacted by damage 124 (i.e., one or more targets) from image 122, and assess the degree of damage 124.
  • System 150 can run on server 130 and communicate with client device 102 via network 120.
  • system 150 includes a target detector 160 that can assess damage 124 in two dimensions.
  • Target detector 160 can identify a part of the vehicle impacted by damage 124 in one dimension and determine damage 124 in another dimension.
  • target detector 160 can apply geometric calculation and division to determine the degree of damage 124 as target 126 that can include the location of damage 124, the parts impacted by damage 124, and the degree of damage 124.
  • target detector 160 can identify the area or location of damage 124 based on the material on which the damage has been inflicted. As a result, target detector 160 can execute the damage detection independent of the underlying vehicle part. This allows target detector 160 to efficiently detect the area or location of damage 124 without relying on how that damaged area may appear on a specific part of the vehicle. In other words, target detector 160 can identify damage 124 and the degree of damage 124 on the material on which damage 124 appears without requiring information of the underlying parts. In addition, target detector 160 can independently identify one or more parts that may be impacted by damage 124. In this way, target detector 160 can identify the area and the degree of damage 124, and the parts impacted by damage 124.
  • Based on the damage information generated by target detector 160, system 150 then generates a damage assessment 134 to determine a repair plan and generate a cost estimate for user 104.
  • System 150 can estimate the cost and/or availability of the parts impacted by damage 124, determine whether a repair or replacement is needed based on the degree of damage 124, and schedule a repair operation based on calendar information of a repair shop.
  • System 150 can then send assessment 134 to client device 102 via network 120.
  • Examples of target detector 160 include, but are not limited to, Faster Region-Convolutional Neural Network (Faster R-CNN), You Only Look Once (YOLO), Single Shot MultiBox Detector (SSD), R-CNN, Light-head R-CNN, and RetinaNet.
  • target detector 160 can reside on client device 102.
  • Target detector 160 can then use a mobile end target detection technique, such as MobileNet+SSD.
  • FIG. 1B illustrates exemplary training and operation of an efficient assessment system, in accordance with an embodiment of the present application.
  • Target detector 160 can operate with high accuracy if target detector 160 is trained with a sufficient number of positive and negative samples.
  • system 150 can also generate image samples 170 from a labeled image 172, which can be an image with labeled targets. It should be noted that the image sampling and target detection can be executed on the same or different devices.
  • Labeled image 172 may at least include a target bounding box 180 that can be hand-labeled in advance and a plurality of internal bounding boxes 182 and 184 in target bounding box 180.
  • Target bounding box 180 is used for surrounding a continuous region of a target (e.g., the largest continuous region of damage), and each of internal bounding boxes 182 and 184 surrounds a segment of the continuous region of the target.
  • the labeling on image 172 can indicate a damage definition, which can include the area and the class associated with a respective continuous damage segment depicted in image 172.
  • the various degrees of damages corresponding to the various materials are defined as the damage classes. For example, if the material is glass, the damage class can include minor scratches, major scratches, glass cracks, etc. The smallest area that includes a continuous segment of the damage is defined as the area of the damage segment. Therefore, for each continuous segment of the damage in image 172, the labeling can indicate the area and the class of the damage segment.
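As an illustration of such a material-based labeling, the damage definitions could be organized as a small data structure. The sketch below is hypothetical: only the glass classes come from the text, and the remaining entries are invented placeholders.

```python
# Hypothetical damage-class taxonomy keyed by material rather than by part.
DAMAGE_CLASSES = {
    "glass": ["minor scratches", "major scratches", "glass cracks"],  # from the text
    "paint surface": ["minor scratches", "major scratches"],          # placeholder
    "plastic component": ["scratches", "cracks", "breakage"],         # placeholder
}

# A label for one continuous damage segment pairs the damage definition
# (material and class) with the smallest area enclosing the segment.
label = {
    "material": "glass",
    "damage_class": "glass cracks",
    "area": (120, 80, 340, 260),  # (x_min, y_min, x_max, y_max), illustrative
}
```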
  • the labeling can indicate the damage definition of the corresponding damage segment. With such labeling of a damage segment, the damage becomes related only to the material and not to a specific part.
  • system 150 can obtain labeled image 172 and determine region proposals for sampling in target bounding box 180.
  • the region proposal can be represented based on a pre-determined bounding box (e.g., with predetermined size and shape).
  • the bounding box of the region proposal can be placed in target bounding box 180 based on a sliding window or an image segmentation algorithm.
  • System 150 compares the region proposal with the corresponding internal bounding boxes to determine overlapping parameters (e.g., an intersection over union (IoU)).
  • System 150 may only compare the region proposal with the internal bounding boxes that are within a distance threshold of the region proposal (e.g., 50 pixels) or have an intersection with the region proposal.
  • Based on whether the overlapping parameters are in a threshold range (e.g., greater than 0.7 or within 0.7-0.99), system 150 may collect the region proposal as a positive sample for training target detector 160. Otherwise, if the overlapping parameters are below a low threshold range (e.g., less than 0.1), system 150 may collect the region proposal as a negative sample. In addition, system 150 can also collect negative samples from outside of target bounding box 180 to ensure that the negative sample does not include any damage information.
  • system 150 can generate image samples 170 that can include accurate positive and negative samples.
  • system 150 can reduce the interference and improve the accuracy of target detector 160, thereby allowing target detector 160 to accurately detect damage 124 from input image 122.
  • FIG. 2 illustrates exemplary bounding boxes for generating image samples for training a target detection system of an efficient assessment system, in accordance with an embodiment of the present application.
  • system 150 can receive an input image 200 for image sampling.
  • Image 200 may depict a vehicular damage 220 (i.e., damage on a vehicle).
  • a user 250 may label image 200 with one or more bounding boxes.
  • Each of the bounding boxes can correspond to a label that indicates a damage definition (i.e., the area of the damage and the class of damage).
  • the bounding boxes include at least one target bounding box and may include a set of internal bounding boxes located in the target bounding box.
  • User 250 may determine the largest continuous region of damage 220 and apply a target bounding box 202 on the largest continuous region.
  • a target bounding box surrounds a continuous region of damage.
  • User 250 may start from the largest continuous region for determining a target bounding box, and continue with the next largest continuous region for a subsequent target bounding box in the same image.
  • User 250 can then select a part of the continuous region in bounding box 202 with an internal bounding box 204. In the same way, user 250 can select internal bounding boxes 206 and 208 in bounding box 202.
  • a bounding box typically takes a square or rectangular shape
  • the bounding box may take any other shape, such as a triangular or oval shape.
  • shapes and sizes of internal bounding boxes 204, 206, and 208 may take the same or different forms.
  • two adjacent bounding boxes may or may not be joined and/or overlapping.
  • the internal bounding boxes within target bounding box 202 may or may not cover the continuous region of damage in its entirety.
  • system 150 can then determine region proposals for sampling in target bounding box 202.
  • the region proposal can be placed in target bounding box 202 based on a sliding window or an image segmentation algorithm.
  • system 150 may determine a region proposal 212 in a region covered by a portion of damage 220.
  • System 150 compares region proposal 212 with corresponding internal bounding boxes 204 and 206 to determine overlapping parameters. Based on whether the overlapping parameters are in a threshold range, system 150 may collect region proposal 212 as a positive sample.
  • system 150 may collect region proposal 212 as a negative sample.
  • system 150 can also determine a region proposal 214 in a region of target bounding box 202 that may not include damage 220 (e.g., using segmentation).
  • System 150 can also determine a region proposal 216 outside of target bounding box 202 to collect a negative sample. In this way, system 150 can use region proposals 214 and 216 for negative samples, thereby ensuring that the corresponding negative samples do not include any damage information.
  • FIG. 3A illustrates an exemplary region proposal generation process for generating image samples, in accordance with an embodiment of the present application.
  • System 150 may determine a set of movement rules that determines the placement of a region proposal in target bounding box 202.
  • System 150 can also receive the movement rules as input.
  • the movement rules can be defined so that, from a current region proposal, a subsequent region proposal can be determined within the region enclosed by target bounding box 202.
  • Such rules can include an initial position of a region proposal (i.e., the position of a bounding box corresponding to the region proposal), the deviation distance from a previous position for a movement, a movement direction, and a movement termination condition.
  • the movement termination condition can be based on one or more of: a number of region proposals and/or movements in a target bounding box and the region covered by the region proposals (e.g., a threshold region).
  • system 150 can determine a number of region proposals in target bounding box 202.
  • the upper left corner of target bounding box 202 is selected as the position of the initial region proposal 302.
  • the next region proposal 304 can be selected based on a movement from left to right along the left-to-right width of target bounding box 202.
  • a predetermined step length can dictate how far region proposal 304 should be from region proposal 302. In this way, a sample can be generated for each movement.
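A minimal sketch of such a movement rule, assuming a left-to-right, top-to-bottom sweep with a fixed window size and step length (all parameter values illustrative):

```python
from typing import Iterator, Tuple

Box = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max)

def sliding_window_proposals(target_box: Box, win_w: float, win_h: float,
                             step: float) -> Iterator[Box]:
    """Yield region proposals inside target_box, starting at its upper-left
    corner and moving left to right, then top to bottom. Termination here is
    simply exhausting the positions; a count- or coverage-based termination
    condition could be substituted."""
    x0, y0, x1, y1 = target_box
    y = y0
    while y + win_h <= y1:
        x = x0
        while x + win_w <= x1:
            yield (x, y, x + win_w, y + win_h)
            x += step
        y += step
```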
  • the position of a region proposal in target bounding box 202 can be randomly selected. To do so, system 150 can randomly determine a reference point of region proposal 302 (e.g., a center or corner point of region proposal 302). The position of the reference point can be selected based on a movement range of the reference point (e.g., a certain distance between the reference point and the boundary of target bounding box 202 should be maintained). System 150 can then place region proposal 302 in target bounding box 202 based on a predetermined size of a region proposal with respect to the reference point.
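The random-placement variant can be sketched analogously, assuming the reference point is the proposal's center and that the window fits inside the target bounding box (again, all names are illustrative):

```python
import random
from typing import Tuple

Box = Tuple[float, float, float, float]

def random_proposal(target_box: Box, win_w: float, win_h: float) -> Box:
    """Place a fixed-size proposal at a random center point whose movement
    range keeps the whole proposal inside target_box. Assumes the target
    box is at least as large as the window in both dimensions."""
    x0, y0, x1, y1 = target_box
    cx = random.uniform(x0 + win_w / 2, x1 - win_w / 2)
    cy = random.uniform(y0 + win_h / 2, y1 - win_h / 2)
    return (cx - win_w / 2, cy - win_h / 2, cx + win_w / 2, cy + win_h / 2)
```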
  • FIG. 3B illustrates an exemplary assessment of a region proposal for generating image samples, in accordance with an embodiment of the present application.
  • system 150 has determined a region proposal 310 in target bounding box 202.
  • system 150 determines the overlapping parameters that indicate the degree and/or proportion of overlap between region proposal 310 and the region enclosed by a respective internal bounding box in target bounding box 202. Since parts of the continuous region of damage 220 are represented by the internal bounding boxes, a high degree of overlap between region proposal 310 and the internal bounding boxes indicates that region proposal 310 includes a significant portion of damage 220. Based on this assessment, system 150 can select region proposal 310 as a positive sample.
  • system 150 may compare region proposal 310 with a respective internal bounding box of target bounding box 202. This comparison can be executed based on an arrangement order of the internal bounding boxes, or based on the distance to region proposal 310 (e.g., from near to far). In some embodiments, system 150 may compare region proposal 310 only with the internal bounding boxes in the vicinity of region proposal 310. For example, system 150 can compare region proposal 310 only with the internal bounding boxes that are within a predetermined threshold distance (e.g., within a 50-pixel distance) or, have an intersection with region proposal 310 (e.g., internal bounding boxes 204, 206, and 208).
  • System 150 can determine whether region proposal 310 can be an image sample based on the overlapping parameters of region proposal 310 and a surrounding region.
  • the surrounding region includes the total region enclosed by region proposal 310 and an internal bounding box that has been compared with region proposal 310. For example, if system 150 has compared region proposal 310 with internal bounding box 204, the surrounding region for region proposal 310 can be the region enclosed by internal bounding box 204 and region proposal 310.
  • System 150 can then determine the overlapping parameters of region proposal 310 with respect to the surrounding region, and determine whether region proposal 310 can be an image sample.
  • the overlapping parameters can indicate whether there is an overlap, the degree of overlap, and the proportion of overlap.
  • FIG. 3C illustrates an exemplary determination of whether a region proposal can be an image sample, in accordance with an embodiment of the present application.
  • system 150 determines whether a region proposal 354 can be selected as an image sample.
  • System 150 can determine the surrounding region 356 (denoted with a gray grid) covered by internal bounding box 352 and region proposal 354.
  • System 150 determines the overlapping region 358 (denoted with a dark line) between region proposal 354 and internal bounding box 352.
  • System 150 can then determine the overlapping parameters for internal bounding box 352 and region proposal 354 as a ratio of overlapping region 358 and surrounding region 356.
  • system 150 may determine the overlapping parameters for internal bounding box 352 and region proposal 354 as a ratio of overlapping region 358 and the region enclosed by internal bounding box 352.
  • If the overlapping parameter is larger than a predetermined threshold (e.g., 0.7) or falls within a threshold range (e.g., 0.7-0.99), system 150 can select region proposal 354 as a positive sample.
  • system 150 can generate a set of overlapping parameters.
  • System 150 can select the region proposal as a positive sample if the largest value of the set of overlapping parameters is larger than a threshold or falls within a threshold range.
  • the size and shape of the bounding box of a region proposal may be adjusted to obtain another bounding box.
  • System 150 can then perform another round of sampling using the new bounding box. If there is another target bounding box in the image, system 150 can use the same method for image sampling in that target bounding box. In this way, system 150 can perform image sampling for each target bounding box in an image.
  • Each target bounding box may allow system 150 to determine a plurality of region proposals for sampling. For each region proposal, system 150 can determine whether to select the region proposal as a positive sample for training a target detector.
  • system 150 may further screen the region proposal as a potential negative sample. For example, system 150 may select the region proposal as a negative sample if the ratio of the region proposal for each corresponding internal bounding box is below a low threshold range (e.g., 0.1). In other words, if the comparison results of a region proposal and the regions enclosed by all corresponding internal bounding boxes meet the condition for a negative sample, system 150 can select the region proposal as a negative sample. Furthermore, system 150 can place a region proposal outside of any target bounding box to collect a negative sample. Since each continuous damage region is covered by a target bounding box, a region proposal outside of any target bounding box can be selected as a negative sample. In this way, system 150 can reduce noise interference in a negative sample.
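Putting the positive and negative screening rules together, here is a hedged sketch of the per-proposal decision, using the example thresholds from the text (0.7-0.99 for positives, below 0.1 for negatives). The IoU helper is inlined so the snippet stands alone:

```python
from typing import List, Optional, Tuple

Box = Tuple[float, float, float, float]

def iou(a: Box, b: Box) -> float:
    """Intersection over union of two axis-aligned boxes."""
    iw = max(min(a[2], b[2]) - max(a[0], b[0]), 0.0)
    ih = max(min(a[3], b[3]) - max(a[1], b[1]), 0.0)
    inter = iw * ih
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def classify_proposal(proposal: Box, internal_boxes: List[Box],
                      pos_range: Tuple[float, float] = (0.7, 0.99),
                      neg_threshold: float = 0.1) -> Optional[str]:
    """Return 'positive', 'negative', or None (discard) for one proposal.

    Positive: the largest overlapping parameter against the compared
    internal boxes falls within the threshold range. Negative: every
    overlapping parameter is below the low threshold."""
    params = [iou(proposal, b) for b in internal_boxes]
    if not params:
        return None
    if pos_range[0] <= max(params) <= pos_range[1]:
        return "positive"
    if max(params) < neg_threshold:
        return "negative"
    return None  # ambiguous overlap: neither sample type
```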
  • FIG. 4 illustrates an exemplary integration of detection results of multiple samples, in accordance with an embodiment of the present application.
  • system 150 has obtained positive samples 402, 404, and 406 from an input image 400.
  • System 150 can represent samples 402, 404, and 406 as inputs 412, 414, and 416, respectively, for target detector 160 of system 150.
  • System 150 can use target detector 160 to generate corresponding outputs 422, 424, and 426.
  • Each of these outputs can include a characteristic description of the corresponding input.
  • the characteristic description can include a feature vector, a label (e.g., based on the damage class), and a corresponding bounding box.
  • System 150 can then construct a splice of outputs 422, 424, and 426 in the bounding box dimension. For example, system 150 can perform a concatenation 432 of the bounding boxes to improve the accuracy of target detector 160.
  • Target detector 160 can be further trained and optimized based on a Gradient Boosted Decision Trees (GBDT) model 434.
  • GBDT model 434 can optimize the concatenated bounding boxes and generate a corresponding target indicator 436, which can include an optimized bounding box and a corresponding label, for the damage depicted in samples 402, 404, and 406. In this way, the efficiency and accuracy of target detector 160 can be further improved.
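One plausible reading of this integration step, sketched with scikit-learn's gradient-boosted trees: the per-sample detector outputs (feature vector plus predicted box) are concatenated into a single fixed-length vector, and a GBDT regressor maps that vector to a refined bounding box. Everything below, including the fixed number of samples per damage and the synthetic data, is an assumption for illustration, not the patent's implementation:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(0)

def concatenate_outputs(features, boxes):
    """Splice per-sample detector outputs (feature vectors and bounding
    boxes) into one fixed-length vector, mirroring concatenation 432."""
    parts = [np.ravel(f) for f in features] + [np.asarray(b, float) for b in boxes]
    return np.concatenate(parts)

# Synthetic stand-in data: 3 positive samples per damage instance, each with
# an 8-dim feature vector and a 4-coordinate box from the detector.
n_damages, n_samples, feat_dim = 50, 3, 8
X = np.stack([
    concatenate_outputs(rng.normal(size=(n_samples, feat_dim)),
                        rng.uniform(0, 100, size=(n_samples, 4)))
    for _ in range(n_damages)
])
y = rng.uniform(0, 100, size=(n_damages, 4))  # hand-labeled target boxes

# One gradient-boosted ensemble per box coordinate.
gbdt = MultiOutputRegressor(GradientBoostingRegressor())
gbdt.fit(X, y)
refined_box = gbdt.predict(X[:1])[0]  # optimized bounding box for one damage
```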
  • FIG. 5A presents a flowchart 500 illustrating a method of an assessment system performing a damage assessment, in accordance with an embodiment of the present application.
  • the system receives an input image indicating damage on a vehicle (operation 502) and performs target detection on the input image (operation 504).
  • the system determines the damage information (e.g., the degree of the damage and the parts impacted by the damage) based on the target detection (operation 506).
  • the system assesses the damage based on the damage information (e.g., whether the parts can be repaired or would need replacement) (operation 508). Subsequently, the system determines a repair plan and cost estimate based on the damage assessment and insurance information (operation 510).
  • FIG. 5B presents a flowchart 530 illustrating a method of an assessment system generating image samples for training a target detection system, in accordance with an embodiment of the present application.
  • the system obtains an image for sampling (operation 532) and retrieves a target bounding box and a set of internal bounding boxes in the target bounding box in the obtained image (operation 534).
  • the system determines a region proposal in the target bounding box based on a set of movement criteria (operation 536) and determines overlapping parameters (e.g., IoUs) associated with the region proposal and the corresponding internal bounding boxes (operation 538).
  • the system determines whether the overlapping parameters are in the threshold range (operation 540). If the overlapping parameters are in the threshold range, the system can select the region proposal as a positive sample (operation 542). If the overlapping parameters are not in the threshold range, the system determines whether the overlapping parameters are below a low threshold range (operation 544). If the overlapping parameters are below a low threshold range, the system can select the region proposal as a negative sample (operation 546).
  • the system determines whether the target bounding box has been fully sampled (i.e., the set of movement criteria has met a termination condition) (operation 548). If the target bounding box has not been fully sampled, the system determines another region proposal in the target bounding box based on the set of movement criteria (operation 536). If the target bounding box has been fully sampled, the system determines whether the input image has been fully sampled (i.e., all target bounding boxes have been sampled) (operation 550).
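Tying the steps of flowchart 530 together, the sampling loop could be rendered roughly as below, reusing the illustrative `sliding_window_proposals` and `classify_proposal` sketches from earlier (both hypothetical, not the patent's reference implementation):

```python
from typing import Dict, List, Tuple

Box = Tuple[float, float, float, float]

def sample_image(targets: Dict[Box, List[Box]],
                 win=(64.0, 64.0), step=32.0):
    """Collect positive and negative proposals for one labeled image.

    `targets` maps each target bounding box to its internal bounding
    boxes (an illustrative data layout). The movement criteria terminate
    when the sweep exhausts each target bounding box."""
    positives, negatives = [], []
    for target_box, internal_boxes in targets.items():
        for proposal in sliding_window_proposals(target_box, win[0], win[1], step):
            verdict = classify_proposal(proposal, internal_boxes)
            if verdict == "positive":
                positives.append(proposal)
            elif verdict == "negative":
                negatives.append(proposal)
    return positives, negatives
```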
  • FIG. 5C presents a flowchart 560 illustrating a method of an assessment system integrating detection results of multiple samples, in accordance with an embodiment of the present application.
  • the system obtains a training image indicating a damaged area of a vehicle (i.e., a vehicular damage) (operation 562) and a sample associated with the damage in the image (operation 564).
  • the system then generates corresponding output comprising features (e.g., a feature vector), a bounding box, and a corresponding label using an AI model (e.g., the target detector) (operation 566).
  • the system then checks whether all samples are iterated (operation 570).
  • the system continues to determine another sample associated with the damage in the image (operation 564). If all samples have been iterated, the system performs bounding box concatenation based on the generated outputs to generate a characteristic description of the damage (operation 572). The system then trains and optimizes using a GBDT model (operation 574) and obtains a bounding box and a corresponding label representing the damage based on the training and optimization (operation 576).
  • FIG. 6 illustrates an exemplary computer system that facilitates an efficient assessment system, in accordance with an embodiment of the present application.
  • Computer system 600 includes a processor 602, a memory device 604, and a storage device 608.
  • Memory device 604 can include volatile memory (e.g., a dual in-line memory module (DIMM)).
  • Computer system 600 can be coupled to a display device 610, a keyboard 612, and a pointing device 614.
  • Storage device 608 can be a hard disk drive (HDD) or a solid-state drive (SSD).
  • Storage device 608 can store an operating system 616, a damage assessment system 618, and data 636. Damage assessment system 618 can facilitate the operations of system 150.
  • Damage assessment system 618 can include instructions, which when executed by computer system 600 can cause computer system 600 to perform methods and/or processes described in this disclosure. Specifically, damage assessment system 618 can include instructions for generating region proposals in a target bounding box of an input image (region proposal module 620). Damage assessment system 618 can also include instructions for calculating overlapping parameters for the region proposal (parameter module 622).
  • damage assessment system 618 includes instructions for determining whether a region proposal can be a positive or a negative sample based on corresponding thresholds (sampling module 624).
  • Damage assessment system 618 can also include instructions for training and optimizing using a GBDT model (optimization module 626). Moreover, damage assessment system 618 includes instructions for assessing a damaged area (i.e., a target) of a vehicle from an input image and generating a repair plan (planning module 628). Damage assessment system 618 may further include instructions for sending and receiving messages (communication module 630). Data 636 can include any data that can facilitate the operations of damage assessment system 618, such as labeled images and generated samples.
  • FIG. 7 illustrates an exemplary apparatus that facilitates an efficient assessment system, in accordance with an embodiment of the present application.
  • Damage assessment apparatus 700 can comprise a plurality of units or apparatuses which may communicate with one another via a wired, wireless, quantum light, or electrical communication channel.
  • Apparatus 700 may be realized using one or more integrated circuits, and may include fewer or more units or apparatuses than those shown in FIG. 7. Further, apparatus 700 may be integrated in a computer system, or realized as a separate device that is capable of communicating with other computer systems and/or devices.
  • apparatus 700 can include units 702-712, which perform functions or operations similar to modules 620-630 of computer system 600 of FIG. 6, including: a region proposal unit 702; a parameter unit 704; a sampling unit 706; an optimization unit 708; a planning unit 710; and a communication unit 712.
  • the data structures and code described in this detailed description are typically stored on a computer-readable storage medium, which may be any device or medium that can store code and/or data for use by a computer system.
  • the computer-readable storage medium includes, but is not limited to, volatile memory, non-volatile memory, magnetic and optical storage devices such as disks, magnetic tape, CDs (compact discs), DVDs (digital versatile discs or digital video discs), or other media capable of storing computer-readable media now known or later developed.
  • the methods and processes described in the detailed description section can be embodied as code and/or data, which can be stored in a computer-readable storage medium as described above.
  • a computer system reads and executes the code and/or data stored on the computer-readable storage medium, the computer system performs the methods and processes embodied as data structures and code and stored within the computer-readable storage medium.
  • the methods and processes described above can be included in hardware modules.
  • the hardware modules can include, but are not limited to, application-specific integrated circuit (ASIC) chips, field-programmable gate arrays (FPGAs), and other programmable-logic devices now known or later developed. When the hardware modules are activated, the hardware modules perform the methods and processes included within the hardware modules.

Abstract

Embodiments described herein provide a system for facilitating image sampling for training a target detector. During operation, the system obtains a first image depicting a first target. Here, the continuous part of the first target in the first image is labeled and enclosed in a target bounding box. The system then generates a set of positive image samples from an area of the first image enclosed by the target bounding box. A respective positive image sample includes at least a part of the first target. The system can train the target detector with the set of positive image samples to detect a second target from a second image. The target detector can be an artificial intelligence (AI) model capable of detecting an object.

Description

METHOD AND COMPUTER-READABLE STORAGE MEDIUM FOR GENERATING
TRAINING SAMPLES FOR TRAINING A TARGET DETECTOR
Inventor: Juan Xu
BACKGROUND
Field
[0001] This disclosure is generally related to the field of artificial intelligence. More specifically, this disclosure is related to a system and method for generating training samples for training a target detector.
Related Art
[0002] In a conventional damage assessment technique, a vehicle insurance company may send a professional claim adjuster to the location of a damaged vehicle to conduct a manual survey and damage assessment. The survey and damage assessment conducted by the adjuster can include determining a repair solution, estimating an indemnity, taking photographs of the vehicle, and archiving the photographs for subsequent assessment of the damage by a damage inspector at the vehicle insurance company. Since the survey and subsequent damage assessment are performed manually, an insurance claim may require a significant number of days to resolve. Such delays in the processing time can lead to poor user experience with the vehicle insurance company. Furthermore, the manual assessments may also incur a large cost (e.g., labor, training, licensing, etc.).
[0003] To address this issue, some vehicle insurance companies use image-based artificial intelligence (AI) models (e.g., machine-learning-based techniques) for assessing vehicle damages. Since the AI models may automatically detect the damages on a vehicle based on images, the automated assessment technique can shorten the wait time and reduce labor costs.
For example, an AI-based assessment technique can be used for automatic identification of the damage of the vehicle (e.g., the parts of the vehicle). Typically, a user can capture a set of images of the vehicle depicting the damages from the user’s location, such as the user’s home or work, and send the images to the insurance company (e.g., using an app or a web interface).
These images can be used by an AI model to identify the damage on the vehicle. In this way, the automated assessment process may reduce the labor costs for a vehicle insurance company and improve user experience associated with claim processing.
[0004] Even though automation has brought many desirable features to a damage assessment system, many problems remain unsolved in universal damage detection (e.g., independent of the damaged parts).
SUMMARY
[0005] Embodiments described herein provide a system for facilitating image sampling for training a target detector. During operation, the system obtains a first image depicting a first target. Here, the continuous part of the first target in the first image is labeled and enclosed in a target bounding box. The system then generates a set of positive image samples from an area of the first image enclosed by the target bounding box. A respective positive image sample includes at least a part of the first target. The system can train the target detector with the set of positive image samples to detect a second target from a second image. The target detector can be an artificial intelligence (AI) model capable of detecting an object.
[0006] In a variation on this embodiment, the first and second targets indicate first and second vehicular damages, respectively. The label of the continuous part indicates a material impacted by the first vehicular damage.
[0007] In a further variation, the system detects the second target by detecting the second vehicular damage based on a corresponding material independent of identifying a part of a vehicle impacted by the second vehicular damage.
[0008] In a variation on this embodiment, the system generates the set of positive image samples by determining a region proposal in the area of the first image enclosed by the target bounding box and selecting the region proposal as a positive sample if an overlapping parameter of the region proposal is in a threshold range.
[0009] In a further variation, the overlapping parameter is a ratio of an overlapping region and a surrounding region of the region proposal. The overlapping region indicates a common region covered by both the region proposal and a set of internal bounding boxes within the target bounding box. A respective internal bounding box can include at least a part of the continuous region. The surrounding region indicates a total region covered by the region proposal and the set of internal bounding boxes.
[0010] In a further variation, the system selects the set of internal bounding boxes based on one of an intersection with the region proposal, a distance from the region proposal, and a total number of internal bounding boxes in the target bounding box.
[0011] In a further variation, the system generates a negative sample, which excludes any part of the first target, from the first image. To do so, the system can select the region proposal as the negative sample in response to determining that the overlapping parameter of the region proposal is in a low threshold range. The system may also select an area outside of the target bounding box in the first image as the negative sample.
[0012] In a further variation, the system determines a set of subsequent region proposals in the area of the first image enclosed by the target bounding box. To do so, the system can apply a movement rule to a previous region proposal and terminate based on a termination condition.
[0013] In a variation on this embodiment, the system generates a second set of positive image samples. To do so, the system can select a positive image sample from a region proposal in a second target bounding box in the first image. The system may also change the size or shape of a bounding box of a region proposal of a previous round.
[0014] In a variation on this embodiment, the system optimizes the training of the target detector by generating a plurality of bounding boxes for a plurality of image samples in the set of positive image samples and combining the plurality of bounding boxes to generate a combined bounding box and a corresponding label. Here, a respective bounding box identifies the corresponding part of the continuous region.
BRIEF DESCRIPTION OF THE FIGURES
[0015] FIG. 1A illustrates exemplary infrastructure and environment facilitating an efficient assessment system, in accordance with an embodiment of the present application.
[0016] FIG. 1B illustrates exemplary training and operation of an efficient assessment system, in accordance with an embodiment of the present application.
[0017] FIG. 2 illustrates exemplary bounding boxes for generating image samples for training a target detection system of an efficient assessment system, in accordance with an embodiment of the present application.
[0018] FIG. 3A illustrates an exemplary region proposal generation process for generating image samples, in accordance with an embodiment of the present application.
[0019] FIG. 3B illustrates an exemplary assessment of a region proposal for generating image samples, in accordance with an embodiment of the present application.
[0020] FIG. 3C illustrates an exemplary determination of whether a region proposal can be an image sample, in accordance with an embodiment of the present application.
[0021] FIG. 4 illustrates an exemplary integration of detection results of multiple samples, in accordance with an embodiment of the present application.
[0022] FIG. 5A presents a flowchart illustrating a method of an assessment system performing a damage assessment, in accordance with an embodiment of the present application.
[0023] FIG. 5B presents a flowchart illustrating a method of an assessment system generating image samples for training a target detection system, in accordance with an embodiment of the present application.
[0024] FIG. 5C presents a flowchart illustrating a method of an assessment system integrating detection results of multiple samples, in accordance with an embodiment of the present application.
[0025] FIG. 6 illustrates an exemplary computer system that facilitates an efficient assessment system, in accordance with an embodiment of the present application.
[0026] FIG. 7 illustrates an exemplary apparatus that facilitates an efficient assessment system, in accordance with an embodiment of the present application.
[0027] In the figures, like reference numerals refer to the same figure elements.
DETAILED DESCRIPTION
[0028] The following description is presented to enable any person skilled in the art to make and use the embodiments, and is provided in the context of a particular application and its requirements. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present disclosure. Thus, the embodiments described herein are not limited to the embodiments shown, but are to be accorded the widest scope consistent with the principles and features disclosed herein.
Overview
[0029] The embodiments described herein solve the problem of efficiently detecting damage to a vehicle by (i) generating positive and negative image samples from a labeled image for training a target detection system; and (ii) integrating detection results of multiple image samples associated with a damage to increase the efficiency of the target detection system. In this way, an assessment system can use the target detection system to identify damages on a vehicle independent of the damaged parts and generate a repair plan based on the identification.
[0030] With existing technologies, an AI-based technique for determining vehicular damages from an image may include determining the damaged parts of a vehicle and the degree of the damages based on similar images in historical image data. Another technique may involve identifying the area of a damaged part in the center of an input image through an identification method and comparing the area of the part with the historical image data to obtain a similar image. By comparing the obtained image with the input image, the technique may determine the degree of damages. However, these techniques are prone to interference from the additional information of the damaged part in the input image, a reflection of light, contaminants, etc. As a result, these techniques may operate with low accuracy while determining the degree of damages.
[0031] For example, for identifying damages on a vehicle using target detection, the technique typically needs to be trained with a certain number of positive samples and negative samples. Here, a certain number of images depicting the damages need to serve as the positive samples, and a certain number of images not depicting the damages need to serve as the negative samples. However, obtaining positive samples in sufficient numbers can be challenging.
Furthermore, a negative sample may include at least a segment of a damaged part and cause interference in the training process. As a result, the AI-based model may not be equipped to detect damages on a part of a vehicle, especially if the model has not been trained with similar damages on the part of the vehicle.
[0032] To solve this problem, embodiments described herein provide an assessment system that can identify a damaged area of a vehicle (i.e., a target) from one or more images of the vehicle and assess the degree of damage on the identified damaged area. The system can assess damage to a vehicle in two dimensions. The system can identify a part of the vehicle based on object detection in one dimension and determine the damage in another dimension. To determine the damage, the system can identify the damaged area based on the material on which the damage has been inflicted. Hence, the system can execute the damage detection independent of the underlying vehicle part. This allows the system to efficiently detect a damaged area on a vehicle without relying on how that damaged area may appear on a specific part of the vehicle.
[0033] To do so, the system can identify damages and the degree of damages on materials, such as the paint surface, plastic components, frosted components, glass, lights, mirrors, etc., without requiring information about the underlying parts. As a result, the system can also be used for the identification of damages on similar materials in other scenarios (i.e., other than the damages on a vehicle). On the other hand, the system can independently identify one or more parts that may represent the damaged area. In this way, the system can identify the damaged area and the degree of damages, as well as the parts that constitute the damaged area. Based on the damage information, the system then performs a damage assessment, determines a repair plan, and generates a cost estimate. For example, the system can estimate the cost and/or availability of the parts, determine whether a repair or replacement is needed based on the degree of damage, determine the deductibles and fees, and schedule a repair operation based on calendar information of a repair shop.
[0034] However, the target detector (e.g., a deep-learning network) can operate with high accuracy only if the target detector is trained with a sufficient number of positive and negative samples. In some embodiments, the system can also generate image samples from labeled images (e.g., images with labeled targets). A labeled image may at least include a target bounding box that can be hand-labeled in advance and a plurality of internal bounding boxes in the target bounding box. The target bounding box is used for surrounding a continuous region of a target (e.g., the largest continuous region of damage), and each of the plurality of internal bounding boxes surrounds a segment of the continuous region of the target.
[0035] During operation, the system can obtain the labeled images and determine region proposals for sampling in the target bounding box. The region proposal can be represented based on a pre-determined bounding box (e.g., with predetermined size and shape). This bounding box can be placed in the target bounding box based on a sliding window or an image segmentation algorithm. The system then compares the region proposal with the corresponding internal bounding boxes to determine overlapping parameters.
[0036] Based on whether the overlapping parameters are in a threshold range, the system may collect the region proposal as a positive sample for training the target detector. Otherwise, if the overlapping parameters are below a low threshold range, the system may collect the region proposal as a negative sample. In addition, the system can also collect negative samples from outside of the target bounding box to ensure that the negative sample does not include any damage information. In this way, the system can reduce interference and improve the accuracy of the target detector.
Exemplary System
[0037] FIG. 1A illustrates exemplary infrastructure and environment facilitating an efficient assessment system, in accordance with an embodiment of the present application. In this example, an infrastructure 100 can include an automated assessment environment 110.
Environment 110 can facilitate automated damage assessment in a distributed environment. Environment 110 can serve a client device 102 using an assessment server 130. Server 130 can communicate with client device 102 via a network 120 (e.g., a local or a wide area network, such as the Internet). Server 130 can include components such as a number of central processing unit (CPU) cores, a system memory (e.g., a dual in-line memory module), a network interface card (NIC), and a number of storage devices/disks. Server 130 can run a database system (e.g., a database management system (DBMS)) for maintaining database instances. [0038] Suppose that a user 104 needs to file an insurance claim regarding damage 124 on a vehicle. If the insurance company deploys an AI-based technique for automatically
determining damages from an image, user 104 may use client device 102 to capture an image 122 depicting damage 124. User 104 can then send an insurance claim 132 comprising image 122 as an input image from client device 102 via network 120. With existing technologies, the AI-based technique may determine the parts damaged by damage 124 and the degree of damage 124 based on the similar images in historical image data. Another technique may involve identifying the area of damage 124 in the center of input image 122 through an identification method and comparing the area of damage 124 with the historical image data to obtain a similar image. By comparing the obtained image with input image 122, the technique may determine the degree of damages.
[0039] However, these techniques are prone to interference from the additional information in input image 122, such as undamaged segments, a reflection of light, contaminants, etc. As a result, these techniques may operate with low accuracy while determining the degree of damages. Furthermore, these techniques typically need to be trained with a certain number of positive samples and negative samples. However, obtaining positive samples in sufficient numbers can be challenging. Furthermore, a negative sample may include interfering elements. As a result, the AI-based technique may not be equipped to detect damage 124, especially if the technique has not been trained with damages similar to damage 124.
[0040] To solve these problems, an automated assessment system 150 can efficiently and accurately identify the area and vehicle parts impacted by damage 124 (i.e., one or more targets) from image 122, and assess the degree of damage 124. System 150 can run on server 130 and communicate with client device 102 via network 120. In some embodiments, system 150 includes a target detector 160 that can assess damage 124 in two dimensions. Target detector 160 can identify a part of the vehicle impacted by damage 124 in one dimension and determine damage 124 in another dimension. Upon determining the damage, target detector 160 can apply geometric calculation and division to determine the degree of damage 124 as target 126 that can include the location of damage 124, the parts impacted by damage 124, and the degree of damage 124.
[0041] Furthermore, target detector 160 can identify the area or location of damage 124 based on the material on which the damage has been inflicted. As a result, target detector 160 can execute the damage detection independent of the underlying vehicle part. This allows target detector 160 to efficiently detect the area or location of damage 124 without relying on how that damaged area may appear on a specific part of the vehicle. In other words, target detector 160 can identify damage 124 and the degree of damage 124 on the material on which damage 124 appears without requiring information of the underlying parts. In addition, target detector 160 can independently identify one or more parts that may be impacted by damage 124. In this way, target detector 160 can identify the area and the degree of damage 124, and the parts impacted by damage 124.
[0042] Based on the damage information generated by target detector 160, system 150 then generates a damage assessment 134 to determine a repair plan and generate a cost estimate for user 104. System 150 can estimate the cost and/or availability of the parts impacted by damage 124, determine whether a repair or replacement is needed based on the degree of damage 124, and schedule a repair operation based on calendar information of a repair shop. System 150 can then send assessment 134 to client device 102 via network 120.
[0043] Examples of target detector 160 include, but are not limited to, Faster Region-based Convolutional Neural Network (Faster R-CNN), You Only Look Once (YOLO), Single Shot MultiBox Detector (SSD), R-CNN, Light-Head R-CNN, and RetinaNet. In some embodiments, target detector 160 can reside on client device 102. Target detector 160 can then use a mobile-end target detection technique, such as MobileNet+SSD.
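For illustration only, one of the detector families listed above can be instantiated with an off-the-shelf library. The choice of torchvision, the ResNet-50 backbone, and the pretrained weights below are assumptions, since the disclosure does not prescribe a particular implementation:

```python
# A minimal sketch (not part of the disclosed embodiments): loading a
# Faster R-CNN detector with torchvision (version >= 0.13 assumed).
import torchvision

detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
detector.eval()  # inference mode; training would instead use generated samples
```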
[0044] FIG. 1B illustrates exemplary training and operation of an efficient assessment system, in accordance with an embodiment of the present application. Target detector 160 can operate with high accuracy if target detector 160 is trained with a sufficient number of positive and negative samples. In some embodiments, system 150 can also generate image samples 170 from a labeled image 172, which can be an image with labeled targets. It should be noted that the image sampling and target detection can be executed on the same or different devices. Labeled image 172 may at least include a target bounding box 180 that can be hand-labeled in advance and a plurality of internal bounding boxes 182 and 184 in target bounding box 180. Target bounding box 180 is used for surrounding a continuous region of a target (e.g., the largest continuous region of damage), and each of internal bounding boxes 182 and 184 surrounds a segment of the continuous region of the target.
[0045] The labeling on image 172 can indicate a damage definition, which can include the area and the class associated with a respective continuous damage segment depicted in image 172. The various degrees of damages corresponding to the various materials are defined as the damage classes. For example, if the material is glass, the damage class can include minor scratches, major scratches, glass cracks, etc. The smallest area that includes a continuous segment of the damage is defined as the area of the damage segment. Therefore, for each continuous segment of the damage in image 172, the labeling can indicate the area and the class of that damage segment. With such labeling of a damage segment, the damage becomes related only to the material and not to a specific part.
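As a non-authoritative sketch, such a damage definition could be recorded with a structure like the following; the field names and coordinate convention are illustrative assumptions:

```python
# Illustrative label structure for one continuous damage region; field
# names are assumptions, not taken from the disclosure.
from dataclasses import dataclass
from typing import List, Tuple

Box = Tuple[int, int, int, int]  # (x1, y1, x2, y2) in pixel coordinates

@dataclass
class DamageLabel:
    material: str              # e.g., "glass", "paint surface"
    damage_class: str          # e.g., "minor scratches", "glass cracks"
    target_box: Box            # smallest box enclosing the continuous region
    internal_boxes: List[Box]  # boxes around segments of that region
```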
[0046] During operation, system 150 can obtain labeled image 172 and determine region proposals for sampling in target bounding box 180. The region proposal can be represented based on a pre-determined bounding box (e.g., with predetermined size and shape). The bounding box of the region proposal can be placed in target bounding box 180 based on a sliding window or an image segmentation algorithm. System 150 then compares the region proposal with the corresponding internal bounding boxes to determine overlapping parameters (e.g., an intersection over union (IoU)). System 150 may only compare the region proposal with the internal bounding boxes that are within a distance threshold of the region proposal (e.g., 50 pixels) or have an intersection with the region proposal.
[0047] Based on whether the overlapping parameters are in a threshold range (e.g., greater than 0.7 or falls within 0.7-0.99), system 150 may collect the region proposal as a positive sample for training target detector 160. Otherwise, if the overlapping parameters are below a low threshold range (e.g., less than 0.1), system 150 may collect the region proposal as a negative sample. In addition, system 150 can also collect negative samples from outside of target bounding box 180 to ensure that the negative sample does not include any damage information.
In this way, system 150 can generate image samples 170 that can include accurate positive and negative samples. By training target detector 160 using image samples 170, system 150 can reduce the interference and improve the accuracy of target detector 160, thereby allowing target detector 160 to accurately detect target 126 from input image 122.
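For illustration only, the selection rule described above can be sketched as follows; the default thresholds mirror the example values given earlier (0.7-0.99 and 0.1), and the function name is a placeholder:

```python
# A minimal sketch of the positive/negative selection rule; thresholds
# and naming are illustrative assumptions based on the example values.
def classify_proposal(overlaps, pos_range=(0.7, 0.99), neg_max=0.1):
    """overlaps: overlapping parameters (e.g., IoU values) between one
    region proposal and its nearby internal bounding boxes."""
    if overlaps and pos_range[0] <= max(overlaps) <= pos_range[1]:
        return "positive"
    if not overlaps or max(overlaps) < neg_max:
        return "negative"
    return None  # ambiguous overlap: skip this proposal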
Image Sampling
[0048] FIG. 2 illustrates exemplary bounding boxes for generating image samples for training a target detection system of an efficient assessment system, in accordance with an embodiment of the present application. During operation, system 150 can receive an input image 200 for image sampling. Image 200 may depict a vehicular damage 220 (i.e., damage on a vehicle). A user 250 may label image 200 with one or more bounding boxes. Each of the bounding boxes can correspond to a label that indicates a damage definition (i.e., the area of the damage and the class of damage). The bounding boxes include at least one target bounding box and may include a set of internal bounding boxes located in the target bounding box.
[0049] User 250 may determine the largest continuous region of damage 220 and apply a target bounding box 202 on the largest continuous region. Here, a target bounding box surrounds a continuous region of damage. User 250 may start from the largest continuous region for determining a target bounding box, and continue with the next largest continuous region for a subsequent target bounding box in the same image. User 250 can then select a part of the continuous region in bounding box 202 with an internal bounding box 204. In the same way, user 250 can select internal bounding boxes 206 and 208 in bounding box 202.
[0050] It should be noted that, even though a bounding box typically takes a square or rectangular shape, the bounding box may take any other shape, such as a triangular or oval shape. In this example, the shapes and sizes of internal bounding boxes 204, 206, and 208 may take the same or different forms. Furthermore, two adjacent bounding boxes may or may not be joined and/or overlapping. In addition, the internal bounding boxes within target bounding box 202 may or may not cover the continuous region of damage in its entirety.
[0051] During operation, system 150 can then determine region proposals for sampling in target bounding box 202. The region proposal can be placed in target bounding box 202 based on a sliding window or an image segmentation algorithm. For collecting a positive sample, system 150 may determine a region proposal 212 in a region covered by a portion of damage 220. System 150 compares region proposal 212 with corresponding internal bounding boxes 204 and 206 to determine overlapping parameters. Based on whether the overlapping parameters are in a threshold range, system 150 may collect region proposal 212 as a positive sample.
[0052] Otherwise, if the overlapping parameters are below a low threshold range, system 150 may collect region proposal 212 as a negative sample. In addition, system 150 can also determine a region proposal 214 in a region of target bounding box 202 that may not include damage 220 (e.g., using segmentation). System 150 can also determine a region proposal 216 outside of target bounding box 202 to collect a negative sample. In this way, system 150 can use region proposals 214 and 216 for negative samples, thereby ensuring that the corresponding negative samples do not include any damage information.
[0053] FIG. 3A illustrates an exemplary region proposal generation process for generating image samples, in accordance with an embodiment of the present application. System 150 may determine a set of movement rules that determines the placement of a region proposal in target bounding box 202. System 150 can also receive the movement rules as input. The movement rules can be defined so that, from a current region proposal, a subsequent region proposal can be determined within the region enclosed by target bounding box 202. Such rules can include an initial position of a region proposal (i.e., the position of a bounding box corresponding to the region proposal), the deviation distance from a previous position for a movement, a movement direction, and a movement termination condition. The movement termination condition can be based on one or more of: a number of region proposals and/or movements in a target bounding box and the region covered by the region proposals (e.g., a threshold region). [0054] Based on the movement rules, system 150 can determine a number of region proposals in target bounding box 202. In some embodiments, the upper left corner of target bounding box 202 is selected as the position of the initial region proposal 302. The next region proposal 304 can be selected based on a movement from left to right along the width of target bounding box 202. A predetermined step length can dictate how far region proposal 304 should be from region proposal 302. In this way, a sample can be generated for each movement.
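A rough sketch of this sliding placement follows; the proposal size and step length are assumed parameters:

```python
# Illustrative sliding-window placement inside a target bounding box;
# the proposal size and step length are assumptions.
def sliding_proposals(target_box, size=(64, 64), step=32):
    x1, y1, x2, y2 = target_box
    w, h = size
    for top in range(y1, y2 - h + 1, step):        # move down row by row
        for left in range(x1, x2 - w + 1, step):   # move left to right
            yield (left, top, left + w, top + h)
```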
[0055] In some further embodiments, the position of a region proposal in target bounding box 202 can be randomly selected. To do so, system 150 can randomly determine a reference point of region proposal 302 (e.g., a center or corner point of region proposal 302). The position of the reference point can be selected based on a movement range of the reference point (e.g., a certain distance between the reference point and the boundary of target bounding box 202 should be maintained). System 150 can then place region proposal 302 in target bounding box 202 based on a predetermined size of a region proposal with respect to the reference point.
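A corresponding sketch of the random-placement variant, again with assumed names and sizes:

```python
# Illustrative random placement: the reference point (here the upper-left
# corner) is drawn so the whole proposal stays inside the target box;
# assumes the proposal size fits within the target bounding box.
import random

def random_proposal(target_box, size=(64, 64), rng=None):
    rng = rng or random
    x1, y1, x2, y2 = target_box
    w, h = size
    left = rng.randint(x1, x2 - w)
    top = rng.randint(y1, y2 - h)
    return (left, top, left + w, top + h)
```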
[0056] FIG. 3B illustrates an exemplary assessment of a region proposal for generating image samples, in accordance with an embodiment of the present application. Suppose that system 150 has determined a region proposal 310 in target bounding box 202. To assess region proposal 310, system 150 determines the overlapping parameters that indicate the degree and/or proportion of overlap between region proposal 310 and the region enclosed by a respective internal bounding box in target bounding box 202. Since parts of the continuous region of damage 220 are represented by the internal bounding boxes, a high degree of overlap between region proposal 310 and the internal bounding boxes indicates that region proposal 310 includes a significant portion of damage 220. Based on this assessment, system 150 can select region proposal 310 as a positive sample.
[0057] When system 150 performs image sampling in the region enclosed by target bounding box 202, system 150 may compare region proposal 310 with a respective internal bounding box of target bounding box 202. This comparison can be executed based on an arrangement order of the internal bounding boxes, or based on the distance to region proposal 310 (e.g., from near to far). In some embodiments, system 150 may compare region proposal 310 only with the internal bounding boxes in the vicinity of region proposal 310. For example, system 150 can compare region proposal 310 only with the internal bounding boxes that are within a predetermined threshold distance (e.g., within a 50-pixel distance) or have an intersection with region proposal 310 (e.g., internal bounding boxes 204, 206, and 208). In this way, system 150 can significantly reduce the volume of data processing. [0058] System 150 can determine whether region proposal 310 can be an image sample based on the overlapping parameters of region proposal 310 and a surrounding region. The surrounding region includes the total region enclosed by region proposal 310 and an internal bounding box that has been compared with region proposal 310. For example, if system 150 has compared region proposal 310 with internal bounding box 204, the surrounding region for region proposal 310 can be the region enclosed by internal bounding box 204 and region proposal 310. System 150 can then determine the overlapping parameters of region proposal 310 with respect to the surrounding region, and determine whether region proposal 310 can be an image sample. The overlapping parameters can indicate whether there is an overlap, the degree of overlap, and the proportion of overlap.
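The vicinity filtering described in paragraph [0057] might be sketched as follows; the 50-pixel threshold follows the example above, and the gap computation is an assumption:

```python
# Illustrative filter keeping only internal boxes that intersect the
# proposal or lie within max_dist pixels of it (50 px per the example).
def nearby_boxes(proposal, internal_boxes, max_dist=50):
    px1, py1, px2, py2 = proposal
    kept = []
    for box in internal_boxes:
        bx1, by1, bx2, by2 = box
        dx = max(bx1 - px2, px1 - bx2, 0)  # horizontal gap; 0 if overlapping
        dy = max(by1 - py2, py1 - by2, 0)  # vertical gap; 0 if overlapping
        if max(dx, dy) <= max_dist:
            kept.append(box)
    return kept
```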
[0059] FIG. 3C illustrates an exemplary determination of whether a region proposal can be an image sample, in accordance with an embodiment of the present application. In this example, system 150 determines whether a region proposal 354 can be selected as an image sample. System 150 can determine the surrounding region 356 (denoted with a gray grid) covered by internal bounding box 352 and region proposal 354. System 150 then determines the overlapping region 358 (denoted with a dark line) between region proposal 354 and internal bounding box 352. System 150 can then determine the overlapping parameters for internal bounding box 352 and region proposal 354 as a ratio of overlapping region 358 and surrounding region 356. In some embodiments, system 150 may determine the overlapping parameters for internal bounding box 352 and region proposal 354 as a ratio of overlapping region 358 and the region enclosed by internal bounding box 352.
[0060] In some embodiments, the ratio is determined based on an intersection over union (IoU) of the regions. If the regions are represented based on pixels, the ratio can be determined as the ratio of corresponding pixels. In the example in FIG. 3C, if one grid represents one pixel, the ratio can be calculated as (pixels in overlapping region 358) / (pixels in surrounding region 356) = 16/122. If the ratio is larger than a predetermined threshold (e.g., 0.7) or falls within a threshold range (e.g., 0.7-0.99), system 150 can select the region proposal as a positive sample.
If system 150 compares a region proposal with a plurality of internal bounding boxes, system 150 can generate a set of overlapping parameters. System 150 can select the region proposal as a positive sample if the largest value of the set of overlapping parameters is larger than a threshold or falls within a threshold range.
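Expressed as code, the ratio of the overlapping region to the surrounding region is the IoU of the two boxes. The following sketch (with assumed names) computes the set of overlapping parameters for one proposal:

```python
# Illustrative overlap computation: intersection over the surrounding
# (union) region, i.e., the IoU of two axis-aligned boxes.
def box_area(box):
    x1, y1, x2, y2 = box
    return max(0, x2 - x1) * max(0, y2 - y1)

def overlap_ratio(a, b):
    inter = box_area((max(a[0], b[0]), max(a[1], b[1]),
                      min(a[2], b[2]), min(a[3], b[3])))
    union = box_area(a) + box_area(b) - inter  # the surrounding region
    return inter / union if union else 0.0

def overlap_parameters(proposal, internal_boxes):
    return [overlap_ratio(proposal, b) for b in internal_boxes]
```

The largest value in the returned list is the one compared against the threshold range described above.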
[0061] In some embodiments, the size and shape of the bounding box of a region proposal may be adjusted to obtain another bounding box. System 150 can then perform another round of sampling using the new bounding box. If there is another target bounding box in the image, system 150 can use the same method for image sampling in that target bounding box. In this way, system 150 can perform image sampling for each target bounding box in an image.
Each target bounding box may allow system 150 to determine a plurality of region proposals for sampling. For each region proposal, system 150 can determine whether to select the region proposal as a positive sample for training a target detector.
[0062] Optionally, if system 150 does not select a region proposal as a positive sample (i.e., the overlapping parameters have not met the condition), system 150 may further screen the region proposal as a potential negative sample. For example, system 150 may select the region proposal as a negative sample if the ratio of the region proposal with respect to each corresponding internal bounding box is below a low threshold (e.g., 0.1). In other words, if the comparison results of a region proposal and the regions enclosed by all corresponding internal bounding boxes meet the condition for a negative sample, system 150 can select the region proposal as a negative sample. Furthermore, system 150 can place a region proposal outside of any target bounding box to collect a negative sample. Since each continuous damage region is covered by a
corresponding target bounding box, a region proposal outside of any target bounding box can be selected as a negative sample. In this way, system 150 can reduce noise interference in a negative sample.
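As a sketch (with assumed image dimensions and a simple rejection loop), a damage-free negative sample can be drawn from outside every target bounding box:

```python
# Illustrative negative sampling outside all target bounding boxes; the
# rejection-sampling strategy and its parameters are assumptions.
import random

def boxes_intersect(a, b):
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def negative_proposal(image_size, target_boxes, size=(64, 64), tries=100):
    width, height = image_size
    w, h = size
    for _ in range(tries):
        left = random.randint(0, width - w)
        top = random.randint(0, height - h)
        cand = (left, top, left + w, top + h)
        if not any(boxes_intersect(cand, t) for t in target_boxes):
            return cand  # contains no labeled damage region
    return None  # no damage-free placement found
```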
Efficient Target Detector
[0063] FIG. 4 illustrates an exemplary integration of detection results of multiple samples, in accordance with an embodiment of the present application. Suppose that system 150 has obtained positive samples 402, 404, and 406 from an input image 400. System 150 can represent samples 402, 404, and 406 as inputs 412, 414, and 416, respectively, for target detector 160 of system 150. System 150 can use target detector 160 to generate corresponding outputs 422, 424, and 426. Each of these outputs can include a characteristic description of the corresponding input. The characteristic description can include a feature vector, a label (e.g., based on the damage class), and a corresponding bounding box.
[0064] System 150 can then construct a splice of outputs 422, 424, and 426 in the bounding box dimension. For example, system 150 can perform a concatenation 432 of the bounding boxes to improve the accuracy of target detector 160. Target detector 160 can be further trained and optimized based on a Gradient Boosted Decision Trees (GBDT) model 434. GBDT model 434 can optimize the concatenated bounding boxes and generate a corresponding target indicator 436, which can include an optimized bounding box and a corresponding label, for the damage depicted in samples 402, 404, and 406. In this way, the efficiency and accuracy of target detector 160 can be further improved.
Operations
[0065] FIG. 5A presents a flowchart 500 illustrating a method of an assessment system performing a damage assessment, in accordance with an embodiment of the present application. During operation, the system receives an input image indicating damage on a vehicle (operation 502) and performs target detection on the input image (operation 504). The system then determines the damage information (e.g., the degree of the damage and the parts impacted by the damage) based on the target detection (operation 506). The system assesses the damage based on the damage information (e.g., whether the parts can be repaired or would need replacement) (operation 508). Subsequently, the system determines a repair plan and cost estimate based on the damage assessment and insurance information (operation 510).
[0066] FIG. 5B presents a flowchart 530 illustrating a method of an assessment system generating image samples for training a target detection system, in accordance with an embodiment of the present application. During operation, the system obtains an image for sampling (operation 532) and retrieves a target bounding box and a set of internal bounding boxes in the target bounding box in the obtained image (operation 534). The system then determines a region proposal in the target bounding box based on a set of movement criteria (operation 536) and determines overlapping parameters (e.g., IoUs) associated with the region proposal and the corresponding internal bounding boxes (operation 538).
[0067] Subsequently, the system determines whether the overlapping parameters are in the threshold range (operation 540). If the overlapping parameters are in the threshold range, the system can select the region proposal as a positive sample (operation 542). If the overlapping parameters are not in the threshold range, the system determines whether the overlapping parameters are below a low threshold range (operation 544). If the overlapping parameters are below a low threshold range, the system can select the region proposal as a negative sample (operation 546).
[0068] Upon selecting the region proposal as a sample (operation 542 or 546), or if the overlapping parameters are not below a low threshold range (operation 544), the system determines whether the target bounding box has been fully sampled (i.e., the set of movement criteria has met a termination condition) (operation 548). If the target bounding box has not been fully sampled, the system determines another region proposal in the target bounding box based on the set of movement criteria (operation 536). If the target bounding box has been fully sampled, the system determines whether the input image has been fully sampled (i.e., all target bounding boxes have been sampled) (operation 550). If the input image has not been fully sampled, the system retrieves another target bounding box and another set of internal bounding boxes in the target bounding box in the obtained image (operation 534). [0069] FIG. 5C presents a flowchart 560 illustrating a method of an assessment system integrating detection results of multiple samples, in accordance with an embodiment of the present application. During operation, the system obtains a training image indicating a damaged area of a vehicle (i.e., a vehicular damage) (operation 562) and a sample associated with the damage in the image (operation 564). The system then generates corresponding output comprising features (e.g., a feature vector), a bounding box, and a corresponding label using an AI model (e.g., the target detector) (operation 566). The system then checks whether all samples are iterated (operation 570).
[0070] If all samples have not been iterated, the system continues to determine another sample associated with the damage in the image (operation 564). If all samples have been iterated, the system performs bounding box concatenation based on the generated outputs to generate a characteristic description of the damage (operation 572). The system then trains and optimizes using a GBDT model (operation 574) and obtains a bounding box and a corresponding label representing the damage based on the training and optimization (operation 576).
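For illustration only, the integration step might look as follows; the use of scikit-learn, the feature layout, and the regression target are all assumptions, since the disclosure names only a GBDT model:

```python
# Loose sketch of operations 572-576: splice per-sample detector outputs
# into one feature row per damage instance, then fit a GBDT to predict an
# optimized bounding box. Library choice and layout are assumptions, and
# a fixed number of samples per damage instance is assumed.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor

def splice(outputs):
    """outputs: list of (feature_vector, box) pairs, one per positive
    sample of the same damage instance."""
    return np.concatenate([np.concatenate([np.asarray(f, float),
                                           np.asarray(box, float)])
                           for f, box in outputs])

def train_integrator(rows, true_boxes):
    """rows: one spliced row per damage instance; true_boxes: the
    hand-labeled boxes (x1, y1, x2, y2) serving as regression targets."""
    model = MultiOutputRegressor(GradientBoostingRegressor())
    return model.fit(np.stack(rows), np.asarray(true_boxes, float))
```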
Exemplary Computer System and Apparatus
[0071] FIG. 6 illustrates an exemplary computer system that facilitates an efficient assessment system, in accordance with an embodiment of the present application. Computer system 600 includes a processor 602, a memory device 604, and a storage device 608. Memory device 604 can include volatile memory (e.g., a dual in-line memory module (DIMM)).
Furthermore, computer system 600 can be coupled to a display device 610, a keyboard 612, and a pointing device 614. Storage device 608 can be a hard disk drive (HDD) or a solid-state drive (SSD). Storage device 608 can store an operating system 616, a damage assessment system 618, and data 636. Damage assessment system 618 can facilitate the operations of system 150.
[0072] Damage assessment system 618 can include instructions, which when executed by computer system 600 can cause computer system 600 to perform methods and/or processes described in this disclosure. Specifically, damage assessment system 618 can include instructions for generating region proposals in a target bounding box of an input image (region proposal module 620). Damage assessment system 618 can also include instructions for calculating overlapping parameters for the region proposal (parameter module 622).
Furthermore, damage assessment system 618 includes instructions for determining whether a region proposal can be a positive or a negative sample based on corresponding thresholds (sampling module 624).
[0073] Damage assessment system 618 can also include instructions for training and optimizing using a GBDT model (optimization module 626). Moreover, damage assessment system 618 includes instructions for assessing a damaged area (i.e., a target) of a vehicle from an input image and generating a repair plan (planning module 628). Damage assessment system 618 may further include instructions for sending and receiving messages (communication module 630). Data 636 can include any data that can facilitate the operations of damage assessment system 618, such as labeled images and generated samples.
[0074] FIG. 7 illustrates an exemplary apparatus that facilitates an efficient assessment system, in accordance with an embodiment of the present application. Damage assessment apparatus 700 can comprise a plurality of units or apparatuses which may communicate with one another via a wired, wireless, quantum light, or electrical communication channel. Apparatus 700 may be realized using one or more integrated circuits, and may include fewer or more units or apparatuses than those shown in FIG. 7. Further, apparatus 700 may be integrated in a computer system, or realized as a separate device that is capable of communicating with other computer systems and/or devices. Specifically, apparatus 700 can include units 702-712, which perform functions or operations similar to modules 620-630 of computer system 600 of FIG. 6, including: a region proposal unit 702; a parameter unit 704; a sampling unit 706; an optimization unit 708; a planning unit 710; and a communication unit 712.
[0075] The data structures and code described in this detailed description are typically stored on a computer-readable storage medium, which may be any device or medium that can store code and/or data for use by a computer system. The computer-readable storage medium includes, but is not limited to, volatile memory, non-volatile memory, magnetic and optical storage devices such as disks, magnetic tape, CDs (compact discs), DVDs (digital versatile discs or digital video discs), or other media capable of storing computer-readable media now known or later developed.
[0076] The methods and processes described in the detailed description section can be embodied as code and/or data, which can be stored in a computer-readable storage medium as described above. When a computer system reads and executes the code and/or data stored on the computer-readable storage medium, the computer system performs the methods and processes embodied as data structures and code and stored within the computer-readable storage medium.
[0077] Furthermore, the methods and processes described above can be included in hardware modules. For example, the hardware modules can include, but are not limited to, application-specific integrated circuit (ASIC) chips, field-programmable gate arrays (FPGAs), and other programmable-logic devices now known or later developed. When the hardware modules are activated, the hardware modules perform the methods and processes included within the hardware modules. [0078] The foregoing embodiments described herein have been presented for purposes of illustration and description only. They are not intended to be exhaustive or to limit the embodiments described herein to the forms disclosed. Accordingly, many modifications and variations will be apparent to practitioners skilled in the art. Additionally, the above disclosure is not intended to limit the embodiments described herein. The scope of the embodiments described herein is defined by the appended claims.

Claims

What Is Claimed Is:
1. A method for facilitating image sampling for training a target detector, comprising:
obtaining a first image depicting a first target, wherein a continuous part of the first target in the first image is labeled and enclosed in a target bounding box;
generating a set of positive image samples from an area of the first image enclosed by the target bounding box, wherein a respective positive image sample includes at least a part of the first target; and
training the target detector with the set of positive image samples to detect a second target from a second image, wherein the target detector is an artificial intelligence (AI) model capable of detecting an object.
2. The method of claim 1, wherein the first and second targets indicate first and second vehicular damages, respectively, and wherein the label of the continuous part indicates a material impacted by the first vehicular damage.
3. The method of claim 2, wherein detecting the second target comprises detecting the second vehicular damage based on a corresponding material independent of identifying a part of a vehicle impacted by the second vehicular damage.
4. The method of claim 1, wherein generating the set of positive image samples comprises:
determining a region proposal in the area of the first image enclosed by the target bounding box; and
selecting the region proposal as a positive sample in response to determining that an overlapping parameter of the region proposal is in a threshold range.
5. The method of claim 4, wherein the overlapping parameter is a ratio of an overlapping region and a surrounding region of the region proposal,
wherein the overlapping region indicates a common region covered by both the region proposal and a set of internal bounding boxes within the target bounding box, wherein a respective internal bounding box includes at least a part of the continuous region, and
wherein the surrounding region indicates a total region covered by the region proposal and the set of internal bounding boxes.
6. The method of claim 5, further comprising selecting the set of internal bounding boxes based on one of:
an intersection with the region proposal;
a distance from the region proposal; and
a total number of internal bounding boxes in the target bounding box.
7. The method of claim 4, further comprising generating a negative sample, which excludes any part of the first target, from the first image by one or more of:
selecting the region proposal as the negative sample in response to determining that the overlapping parameter of the region proposal is in a low threshold range; and
selecting an area outside of the target bounding box in the first image as the negative sample.
8. The method of claim 4, further comprising determining a set of subsequent region proposals in the area of the first image enclosed by the target bounding box by:
applying a movement rule to a previous region proposal; and
terminating based on a termination condition.
9. The method of claim 1, further comprising generating a second set of positive image samples by one or more of:
selecting a positive image sample from a region proposal in a second target bounding box in the first image; and
changing a size or shape of a bounding box of a region proposal of a previous round.
10. The method of claim 1, further comprising optimizing the training of the target detector by:
generating a plurality of bounding boxes for a plurality of image samples in the set of positive image samples, wherein a respective bounding box identifies the corresponding part of the continuous region; and
combining the plurality of bounding boxes to generate a combined bounding box and a corresponding label.
11. A non-transitory computer-readable storage medium storing instructions that when executed by a computer, cause the computer to perform a method for facilitating image sampling for training a target detector, the method comprising:
obtaining a first image depicting a first target, wherein a continuous part of the first target in the first image is labeled and enclosed in a target bounding box;
generating a set of positive image samples from an area of the first image enclosed by the target bounding box, wherein a respective positive image sample includes at least a part of the first target; and
training the target detector with the set of positive image samples to detect a second target from a second image, wherein the target detector is an artificial intelligence (AI) model capable of detecting an object.
12. The non-transitory computer-readable storage medium of claim 11, wherein the first and second targets indicate first and second vehicular damages, respectively, and wherein the label of the continuous part indicates a material impacted by the first vehicular damage.
13. The non-transitory computer-readable storage medium of claim 12, wherein detecting the second target comprises detecting the second vehicular damage based on a corresponding material independent of identifying a part of a vehicle impacted by the second vehicular damage.
14. The non-transitory computer-readable storage medium of claim 11, wherein generating the set of positive image samples comprises:
determining a region proposal in the area of the first image enclosed by the target bounding box; and
selecting the region proposal as a positive sample in response to determining that an overlapping parameter of the region proposal is in a threshold range.
15. The non-transitory computer-readable storage medium of claim 14, wherein the overlapping parameter is a ratio of an overlapping region and a surrounding region of the region proposal,
wherein the overlapping region indicates a common region covered by both the region proposal and a set of internal bounding boxes within the target bounding box, wherein a respective internal bounding box includes at least a part of the continuous region, and
wherein the surrounding region indicates a total region covered by the region proposal and the set of internal bounding boxes.
16. The non-transitory computer-readable storage medium of claim 15, wherein the method further comprises selecting the set of internal bounding boxes based on one of:
an intersection with the region proposal;
a distance from the region proposal; and
a total number of internal bounding boxes in the target bounding box.
17. The non-transitory computer-readable storage medium of claim 14, wherein the method further comprises generating a negative sample, which excludes any part of the first target, from the first image by one or more of:
selecting the region proposal as the negative sample in response to determining that the overlapping parameter of the region proposal is in a low threshold range; and
selecting an area outside of the target bounding box in the first image as the negative sample.
18. The non-transitory computer-readable storage medium of claim 14, wherein the method further comprises determining a set of subsequent region proposals in the area of the first image enclosed by the target bounding box by:
applying a movement rule to a previous region proposal; and
terminating based on a termination condition.
19. The non-transitory computer-readable storage medium of claim 11, wherein the method further comprises generating a second set of positive image samples by one or more of:
selecting a positive image sample from a region proposal in a second target bounding box in the first image; and
changing a size or shape of a bounding box of a region proposal of a previous round.
20. The non-transitory computer-readable storage medium of claim 11, wherein the method further comprises optimizing the training of the target detector by:
generating a plurality of bounding boxes for a plurality of image samples in the set of positive image samples, wherein a respective bounding box identifies the corresponding part of the continuous region; and
combining the plurality of bounding boxes to generate a combined bounding box and a corresponding label.