CN113780435B - Vehicle damage detection method, device, equipment and storage medium - Google Patents

Vehicle damage detection method, device, equipment and storage medium

Info

Publication number
CN113780435B
Authority
CN
China
Prior art keywords
damage
damaged
vehicle
detection
integrated
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111080117.1A
Other languages
Chinese (zh)
Other versions
CN113780435A (en)
Inventor
方起明
刘莉红
刘玉宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202111080117.1A priority Critical patent/CN113780435B/en
Publication of CN113780435A publication Critical patent/CN113780435A/en
Priority to PCT/CN2022/071075 priority patent/WO2023040142A1/en
Application granted granted Critical
Publication of CN113780435B publication Critical patent/CN113780435B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133Distances to prototypes
    • G06F18/24137Distances to cluster centroïds
    • G06F18/2414Smoothing the distance, e.g. radial basis function networks [RBFN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/08Insurance

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • Technology Law (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)

Abstract

An embodiment of the invention provides a vehicle damage detection method, device, equipment and storage medium, relating to the technical field of artificial intelligence image processing. The method comprises: acquiring a damaged vehicle image, inputting the damaged vehicle image into a trained integrated damage assessment model to obtain damaged part information, obtaining a damage position corresponding to the damaged vehicle image according to a vehicle part segmentation model, and obtaining damage information of the vehicle according to the damage position and the damaged part information. Compared with the plural separate damage detection models of the related art, the embodiment of the application can obtain the corresponding damaged part information through a single forward pass of the integrated damage assessment model, which greatly reduces the occupied computing resources and the consumed computing time, effectively improves the detection efficiency, and reduces the labor cost of vehicle damage assessment. Moreover, by combining the damage position corresponding to the damaged vehicle image obtained from the vehicle part segmentation model, damage information of the vehicle that contains positioning information can be obtained.

Description

Vehicle damage detection method, device, equipment and storage medium
Technical Field
The embodiment of the application relates to the field of artificial intelligence, in particular to a vehicle damage detection method, device, equipment and storage medium.
Background
Determining the damage location and the extent of damage of a vehicle damaged in a traffic accident is a very important task. The traditional damage assessment method relies on manual judgment, which is inefficient and prone to errors caused by the personal factors of the damage assessors. Artificial intelligence (AI) is a theory, method, technology and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use that knowledge to obtain optimal results. In recent years, with the development of artificial intelligence technology, some institutions have adopted computer-vision-based methods (target detection and the like) to intelligently assess the damage of damaged vehicles, and these methods reduce the dependence of damage assessment work on manpower to a certain extent.
However, because damage scenes are complex and changeable, vehicle parts are numerous and damage forms are varied, it is often necessary to develop a large number of damage detection models, for example different damage detection models for sheet metal parts, glass, tires and so on according to the materials of the parts. The drawbacks of this scheme are obvious. First, in the model training and development stage, a plurality of damage data sets need to be prepared and labeled, and a plurality of damage detection models need to be trained and optimized separately; this process is complex and inefficient and requires more manpower and material investment, and at the same time each trained model can only be used for its corresponding part type, so the data reuse rate is low and the robustness is poor. Second, in the model deployment stage, a plurality of models consume more computing resources while increasing scheduling difficulty and cost.
Disclosure of Invention
The following is a summary of the subject matter described in detail herein. This summary is not intended to limit the scope of the claims.
The embodiment of the application provides a vehicle damage detection method, device, equipment and storage medium, which can effectively improve detection efficiency and detection precision and reduce labor cost in the process of vehicle damage assessment.
In a first aspect, an embodiment of the present application provides a vehicle damage detection method, including:
acquiring a damaged vehicle image;
inputting the damaged vehicle image into a trained integrated damage assessment model to obtain damage component information, wherein the integrated damage assessment model comprises:
a shared backbone neural network, which is integrated from damage detection models corresponding to at least one component type and is used for preprocessing the damaged vehicle image to obtain first output data; and
a damage detection classification layer, which is used for processing the first output data to obtain damaged part information;
inputting the damaged vehicle image into a vehicle part segmentation model to obtain a damage position corresponding to the damaged vehicle image; and
locating the damaged part information according to the damage position to obtain damage information of the vehicle.
In an alternative implementation, the shared backbone neural network is a deep residual neural network, the deep residual neural network comprising: a ResNet-50 network, a ResNet-101 network, a ResNet-110 network, or a ResNet-152 network;
inputting the damaged vehicle image into a trained integrated damage assessment model to obtain damaged part information, wherein the method comprises the following steps of:
performing residual feature vector extraction processing on the damaged vehicle image based on the sequentially connected residual blocks in the deep residual neural network to obtain first output data; wherein each residual block comprises an identity mapping and at least two convolution layers, and the identity mapping of each residual block points from the input end of the residual block to the output end of the residual block;
and inputting the first output data to a damage detection classification layer corresponding to the part type to obtain damaged part information.
In an alternative implementation, the integrated damage assessment model is trained through the following training process:
acquiring a damage data set corresponding to at least one component type as a training data set, wherein the training data set comprises a corresponding damage judgment label;
Inputting the training data set into the shared backbone neural network to obtain characteristic data;
inputting the characteristic data into a damage detection classification layer corresponding to the part type to obtain a damage part information detection result;
and training to obtain the integrated damage assessment model according to the detection error between the damage part information detection result and the damage judgment label.
In an optional implementation manner, the damage detection model corresponding to each component type includes a loss function corresponding to the damage detection model, and training to obtain the integrated damage assessment model according to the detection error between the damage component information detection result and the damage judgment label further includes:
and adjusting parameters in the integrated damage assessment model according to the detection error until the loss function meets a convergence condition, so as to obtain the integrated damage assessment model.
In an optional implementation manner, the acquiring, as the training data set, the damage data set corresponding to the at least one component type includes:
and uniformly sampling the damage data set corresponding to each component type by adopting a uniform sampling strategy to obtain a training data set, so as to balance the sample quantity among the damage data sets corresponding to different component types.
In an optional implementation manner, the acquiring, as the training data set, the damage data set corresponding to the at least one component type includes:
when the difference in the number of samples between the damage data sets corresponding to different component types is large, adopting an inter-class balanced sampling strategy to oversample the damage data set corresponding to the component type with few samples, so as to obtain a training data set.
In an alternative implementation, the component types include: sheet metal parts, glass and tires;
the shared backbone neural network is integrated from a sheet metal part damage detection model, a glass damage detection model and a tire damage detection model;
the damage detection classification layer includes: sheet metal part damage detection classification layer, glass damage detection classification layer or tire damage detection classification layer;
the damaged part information includes: the name of the damaged part, the damage status and the damage degree;
the damaged part name includes: one or more of a sheet metal part, glass or tire;
the damage status includes: one or more of scoring, scraping, sagging, creasing, tearing, missing, or cracking;
the damage degree comprises: one or more of mild injury, moderate injury, or severe injury.
In a second aspect, an embodiment of the present application provides a vehicle damage detection device, including:
the image acquisition module is used for acquiring damaged vehicle images;
the damaged part information determining module is used for inputting the damaged vehicle image into a trained integrated damage assessment model to obtain damaged part information, and the integrated damage assessment model comprises:
a shared backbone neural network, which is integrated from damage detection models corresponding to at least one component type and is used for preprocessing the damaged vehicle image to obtain first output data; and
a damage detection classification layer, which is used for processing the first output data to obtain damaged part information;
an image segmentation module, which is used for inputting the damaged vehicle image into a vehicle part segmentation model to obtain a damage position corresponding to the damaged vehicle image;
and a damage information synthesis module, which is used for locating the damaged part information according to the damage position to obtain damage information of the vehicle.
In a third aspect, a computer device includes a processor and a memory;
The memory is used for storing programs;
the processor is configured to execute the vehicle damage detection method according to any one of the first aspects according to the program.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing computer-executable instructions for performing the vehicle damage detection method according to any one of the first aspects.
Compared with the related art, the vehicle damage detection method of the embodiment of the application acquires a damaged vehicle image, inputs the damaged vehicle image into a trained integrated damage assessment model to obtain damaged part information, obtains a damage position corresponding to the damaged vehicle image according to a vehicle part segmentation model, and obtains damage information of the vehicle according to the damage position and the damaged part information. The integrated damage assessment model comprises a shared backbone neural network integrated from the damage detection models corresponding to at least one component type; this integrated form facilitates model optimization and training, and the labeled data of all damage types can be used during training, which effectively improves the robustness of the model and reduces the possibility of overfitting. Compared with the plural separate damage detection models of the related art, the embodiment of the application can obtain the corresponding damaged part information through a single forward pass of the integrated damage assessment model, which greatly reduces the occupied computing resources and the consumed computing time, effectively improves the detection efficiency, and reduces the labor cost of vehicle damage assessment. In addition, by combining the damage position corresponding to the damaged vehicle image obtained through the vehicle part segmentation model, damage information of the vehicle that contains positioning information can be obtained; furthermore, because the damage information of the vehicle contains the damage position on the part, false damage detections in the background area can be filtered out, further improving the detection precision of vehicle damage detection.
It is to be understood that the advantages of the second to fourth aspects compared with the related art are the same as those of the first aspect compared with the related art, and reference may be made to the related description in the first aspect, which is not repeated herein.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required for the embodiments or the description of the related art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort to a person having ordinary skill in the art.
FIG. 1 is a schematic diagram of an exemplary system architecture provided by one embodiment of the present application;
FIG. 2 is a flow chart of a method for vehicle damage detection provided in one embodiment of the present application;
FIG. 3 is a block diagram of a related art vehicle detection model;
FIG. 4 is a block diagram of an integrated damage assessment model provided by one embodiment of the present application;
FIG. 5 is a flow chart of a method for vehicle damage detection provided in one embodiment of the present application;
FIG. 6 is a flow chart of training an integrated damage assessment model in a vehicle damage detection method according to one embodiment of the present application;
FIG. 7 is a further flow chart of a method for detecting vehicle damage according to one embodiment of the present application;
fig. 8 is a block diagram of a vehicle damage detection device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system configurations, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. However, it will be apparent to one skilled in the art that the embodiments of the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the embodiments of the present application with unnecessary detail.
It should be noted that although a logical order is illustrated in the flowchart, in some cases, the steps illustrated or described may be performed in an order different from that in the flowchart. The terms first, second and the like in the description and in the claims and in the above-described figures, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
It should also be appreciated that references to "one embodiment" or "some embodiments" or the like described in the specification of embodiments of the present application mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
Automobiles are among the most frequently used means of land transportation and are inevitably damaged under the influence of factors such as weather, road conditions and driver skill, especially in traffic accidents. Determining the damage location and the extent of damage of a damaged vehicle is therefore a very important task, since it affects not only the subsequent vehicle repair plan but also the confirmation of the economic compensation liability of the parties involved in the accident. The traditional damage assessment method relies on manual judgment, which is inefficient and prone to errors caused by the personal factors of the damage assessors. In recent years, with the development of artificial intelligence technology, some institutions have adopted computer-vision-based methods (target detection and the like) to intelligently assess the damage of damaged vehicles, and these methods reduce the dependence of damage assessment work on manpower to a certain extent.
Generally, in the design and development of such an intelligent vehicle damage assessment system, because damage scenes are complex and changeable, vehicle components are numerous and damage forms are varied, it is often necessary to develop a large number of damage detection models, such as damage detection models for sheet metal parts, glass, tires and so on according to the materials of the components. The drawbacks of this scheme are obvious. First, in the model training and development stage, a plurality of damage data sets need to be labeled and prepared, and a plurality of damage detection models need to be trained and optimized separately; the process of training different damage detection models separately is complex and inefficient and requires more manpower and material investment, and at the same time the robustness of the trained models is poor because the data reuse rate is low. Second, in the model deployment stage, a plurality of models consume more computing resources while increasing scheduling difficulty and cost.
The embodiment of the application addresses these drawbacks and provides a vehicle damage detection method: a damaged vehicle image is acquired and input into a trained integrated damage assessment model to obtain damaged part information, a damage position corresponding to the damaged vehicle image is obtained according to a vehicle part segmentation model, and damage information of the vehicle is obtained according to the damage position and the damaged part information. The integrated damage assessment model comprises a shared backbone neural network integrated from the damage detection models corresponding to at least one component type; this integrated form facilitates model optimization and training, and the labeled data of all damage types can be used during training, which effectively improves the robustness of the model and reduces the possibility of overfitting. Compared with the plural separate damage detection models of the related art, the integrated damage assessment model can obtain all damaged part information through one forward inference, which greatly reduces the occupied computing resources and the consumed computing time, effectively improves the detection efficiency, and reduces the labor cost of vehicle damage assessment. By combining the damage position corresponding to the damaged vehicle image obtained through the vehicle part segmentation model, damage information of the vehicle that contains positioning information can be obtained; moreover, because the damage position of the component is included, false damage detections in the background area can be filtered out, further improving the detection precision of vehicle damage detection.
Embodiments of the present application are further described below with reference to the accompanying drawings.
Fig. 1 shows a schematic diagram of an exemplary system architecture to which the technical solution of an embodiment of the present invention may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices (for example one or more of the desktop computer 101, the tablet 102 and the portable computer 103 shown in fig. 1, or of course other terminal devices with display screens), a network 104 and a server 105. The network 104 is the medium used to provide communication links between the terminal devices and the server 105. The network 104 may include various connection types, such as wired communication links, wireless communication links and the like.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
For example, the server 105 may be a stand-alone server, or may be a cloud server that provides cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery networks (Content Delivery Network, CDN), and basic cloud computing services such as big data and artificial intelligence platforms, or a server cluster formed by a plurality of servers, and so on.
In one embodiment of the present invention, the user may use the terminal device 101 (or the terminal device 102 or 103) to upload damaged vehicle images to the server 105; these may include damaged vehicle images acquired at different angles and different distances for different locations, or a single photograph. After acquiring the damaged vehicle images, the server 105 inputs them into a trained integrated damage assessment model to obtain damaged part information, obtains the damage positions corresponding to the damaged vehicle images according to the vehicle part segmentation model, and obtains damage information of the vehicle according to the damage positions and the damaged part information. In this way the detection efficiency can be effectively improved and the labor cost of vehicle damage assessment reduced. Meanwhile, because the damage position of the component is included, false damage detections in the background area can be filtered out, further improving the detection precision of vehicle damage detection.
It should be noted that the vehicle damage detection method provided in the embodiment of the present invention is generally executed by the server 105, and accordingly the vehicle damage detection device is generally disposed in the server 105. However, in other embodiments of the present invention, a terminal device may have functions similar to those of the server and thus execute the vehicle damage detection scheme provided in the embodiments of the present invention.
The system architecture and the application scenario described in the embodiments of the present application are for more clearly describing the technical solution of the embodiments of the present application, and do not constitute a limitation on the technical solution provided by the embodiments of the present application, and those skilled in the art can know that, with the evolution of the system architecture and the appearance of a new application scenario, the technical solution provided by the embodiments of the present application is equally applicable to similar technical problems. Those skilled in the art will appreciate that the system architecture shown in fig. 1 is not limiting of the embodiments of the present application, and may include more or fewer components than shown, or certain components in combination, or a different arrangement of components.
The embodiment of the application can acquire and process the related data based on the artificial intelligence technology. Among these, artificial intelligence (Artificial Intelligence, AI) is the theory, method, technique and application system that uses a digital computer or a digital computer-controlled machine to simulate, extend and extend human intelligence, sense the environment, acquire knowledge and use knowledge to obtain optimal results.
Based on the above system architecture, various embodiments of the vehicle damage detection method of the embodiments of the present application are presented.
As shown in fig. 2, fig. 2 is a flowchart of a vehicle damage detection method according to an embodiment of the present application, including, but not limited to, steps S110 to S140.
Step S110, a damaged vehicle image is acquired.
And step S120, inputting the damaged vehicle image into the trained integrated damage assessment model to obtain damaged part information.
Step S130, inputting the damaged vehicle image into the vehicle part segmentation model to obtain a damaged position corresponding to the damaged vehicle image.
And step S140, positioning and obtaining damage information of the vehicle from the damage part information according to the damage position.
It is to be understood that steps S120 and S130 do not have to be performed in the above order: steps S120 and S130 may be performed simultaneously, or step S130 may be performed before step S120; this embodiment does not limit the execution order.
In one embodiment, the damaged part information in step S120 includes a damaged part name, a damage status and a damage degree. The damaged part names include, but are not limited to: sheet metal parts, glass, tires and the like; the damage status includes, but is not limited to: scoring, scraping, sagging, creasing, tearing, missing, cracking and the like; the damage degree includes, but is not limited to: mild damage, moderate damage, severe damage and the like.
In an embodiment, after a traffic accident, a user can take images of the accident scene by using a mobile phone, for example, take images of damaged vehicles at different positions and different angles to obtain images of the damaged vehicles, and then send the images of the damaged vehicles to a background server for damage assessment.
The background server stores a pre-trained integrated damage assessment model and obtains the damaged part information in the damaged vehicle image uploaded by the user, where the damaged part information comprises the damaged part name, the damage status and the damage degree; for example, the damaged part name of the vehicle is a sheet metal part, the damage status is scraping, and the damage degree is moderate damage.
The damage position corresponding to the damaged vehicle in the damaged vehicle image is obtained through the vehicle part segmentation model; for example, the damage position is the left front door sheet metal part.
Damage information of the damaged vehicle is then obtained according to the damage position and the damaged part information, where the damage information comprises the damaged part information and the damage location. In the example described above, the damage information of the damaged vehicle is: the left front door sheet metal part is scraped, with a moderate damage degree.
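For illustration only, the following Python sketch shows one possible way to represent the combined damage information record described in this example (damaged part name, damage status, damage degree, and the damage location from the vehicle part segmentation model). The field names are assumptions of this sketch and are not prescribed by the embodiment.

```python
# Illustrative sketch only: field names are assumed, not prescribed by this embodiment.
from dataclasses import dataclass

@dataclass
class DamageInfo:
    part_name: str       # e.g. "sheet metal part"
    damage_status: str   # e.g. "scraping"
    damage_degree: str   # e.g. "moderate damage"
    location: str        # e.g. "left front door", from the vehicle part segmentation model

info = DamageInfo("sheet metal part", "scraping", "moderate damage", "left front door")
print(f"{info.location} {info.part_name}: {info.damage_status}, {info.damage_degree}")
```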
Additionally, in an embodiment, the integrated damage assessment model comprises:
a shared backbone neural network, which is integrated from damage detection models corresponding to at least one component type and is used for preprocessing the damaged vehicle image to obtain first output data; and a damage detection classification layer, which is used for processing the first output data to obtain damaged part information.
In this embodiment, the integrated damage assessment model is composed of an integrated shared backbone neural network and a plurality of damage detection classification layers (one per component type); each damage detection classification layer processes the damage data set of one component type. The shared backbone neural network is obtained by integrating the damage detection models corresponding to at least one component type and is connected to the damage detection classification layer corresponding to each component type, which outputs the damaged part information for that component type.
In other words, the damage detection branches for the different components share the backbone neural network in front and differ only in the final damage detection classification layer: after passing through the shared backbone neural network, each damage data set predicts its damage categories through the damage detection classification layer at its corresponding end. In this way, a plurality of damage detection models are integrated into one model by sharing a backbone neural network. Unlike the existing scheme with one damage detection model per component, the shared backbone neural network does not distinguish component types and performs the different damage detections with one network structure, while the plurality of damage detection classification layers correspond to the plurality of damage data sets, so that the damage results of all the components are output.
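As a hedged illustration of the structure just described, the following PyTorch-style sketch builds one shared backbone feeding several part-type-specific damage detection classification layers, so that a single forward pass yields outputs for every part type. The ResNet-50 backbone choice, the feature dimension, the class counts and the classification-only heads (the embodiment describes detection models, which would also predict damage locations) are assumptions of this sketch, not the reference implementation of the patent.

```python
# Sketch under stated assumptions: shared backbone + per-part-type classification heads.
import torch
import torch.nn as nn
import torchvision

class IntegratedDamageModel(nn.Module):
    def __init__(self, num_damage_classes: dict):
        super().__init__()
        resnet = torchvision.models.resnet50(weights=None)             # shared backbone
        self.backbone = nn.Sequential(*list(resnet.children())[:-1])   # drop the final fc layer
        # One damage detection classification layer per component type.
        self.heads = nn.ModuleDict({
            part: nn.Linear(2048, n) for part, n in num_damage_classes.items()
        })

    def forward(self, image: torch.Tensor) -> dict:
        feat = self.backbone(image).flatten(1)   # "first output data" (shared features)
        # A single forward pass produces outputs for every component type.
        return {part: head(feat) for part, head in self.heads.items()}

model = IntegratedDamageModel({"sheet_metal": 7, "glass": 3, "tire": 4})
outputs = model(torch.randn(1, 3, 224, 224))     # dict of per-component-type damage logits
```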
The structural block diagram of the vehicle detection model in the related art and that of the integrated damage assessment model of this embodiment are compared below.
Referring to fig. 3, which is a block diagram of a vehicle detection model in the related art, since there are many vehicle components, three types of sheet metal parts, glass, and tires are selected for illustration in this embodiment, and it is understood that the present invention is not limited to these three types.
The structural block diagram of the vehicle detection model in the related art comprises three functional units, namely:
a first unit: damage data sets, comprising a sheet metal part damage detection data set, a glass damage detection data set and a tire damage detection data set. Each of these damage data sets contains labels only for the damage of a specific component and does not contain the damage label information of other components; the training samples of a damage data set may include a damage image, a damage judgment label and the like.
a second unit: single-data-set detection models, comprising a sheet metal part damage detection model, a glass damage detection model and a tire damage detection model, where each damage detection model includes a damage detection main body and a damage detection classifier. A different detection model is obtained for each data set.
a third unit: loss functions, comprising a sheet metal part damage loss function, a glass damage loss function and a tire damage loss function; the loss functions correspond one-to-one to the damage detection models and are used for updating the parameters of the damage detection models.
During training, the damage data sets of the first unit are respectively input into the single-data-set detection models of the second unit to obtain the corresponding damage detection results, and the parameters of the damage detection models are then updated using the loss functions of the third unit. For example, the sheet metal part damage detection data set is input into the sheet metal part damage detection model to obtain the sheet metal part damage detection result, and the parameters of the sheet metal part damage detection model are then updated using the sheet metal part damage loss function.
Referring to fig. 3, in general, each vehicle detection model in the related art operates independently without interfering with the others. This leads to the following problems. First, in the model training and development stage, a plurality of damage data sets need to be prepared and labeled, and a plurality of damage detection models need to be trained and optimized separately; the process of training different damage detection models separately is complex and inefficient and requires more manpower and material investment, and at the same time the robustness of the trained models is poor because the data reuse rate is low. Second, in the model deployment stage, a plurality of models consume more computing resources and increase scheduling difficulty and cost. A block diagram of the integrated damage assessment model of an embodiment of the present application is described below with reference to fig. 4.
Referring to fig. 4, a structural block diagram of an integrated impairment model according to one embodiment of the present application is provided, including the following functional units.
Since the vehicle has more components, three types of sheet metal parts, glass and tires are selected for illustration in this embodiment, and it is understood that the present invention is not limited to these three types.
The structural block diagram of the integrated damage assessment model comprises four units, which are respectively:
a first unit: damage data sets, comprising sheet metal part damage detection data sets, glass damage detection data sets, tire damage detection data sets and the like. Each of these damage data sets contains labels only for the damage of a specific component and does not contain the damage label information of other components; the training samples of a damage data set may include damage images, damage judgment labels and the like, for example:
sample 1: the damage image is a glass image, and the damage judgment label is: [glass, moderate damage, breakage];
sample 2: the damage image is a sheet metal part image, and the damage judgment label is: [sheet metal part, light damage, scratch].
a second unit: a shared backbone neural network, which is integrated from the damage detection models corresponding to at least one component type, for example a sheet metal part damage detection model, a glass damage detection model and a tire damage detection model.
a third unit: damage detection classification layers; different component types correspond to different damage detection classification layers, such as a sheet metal part damage detection classification layer, a glass damage detection classification layer, a tire damage detection classification layer and the like. The second unit and the third unit constitute the integrated damage assessment model of this embodiment.
a fourth unit: loss functions, comprising a sheet metal part damage loss function, a glass damage loss function, a tire damage loss function and the like; the loss functions correspond one-to-one to the damage detection models and are used for updating the parameters of the damage detection models. Since the damage categories of each damage data set are specific to a particular component, the damage categories of the different damage data sets do not overlap, and the loss functions do not need to be merged into one for training.
During training, after a damage data set of the first unit is input into the shared backbone neural network of the second unit, the corresponding damage detection classification layer is selected according to the damaged component in the output. Because each damage data set contains labels only for the damage of a specific component and does not contain the damage label information of other components, the damage categories cannot simply be merged so as to train a single detection model that detects the damage of all components. Instead, one integrated shared backbone neural network is trained without merging the target category spaces of the plurality of damage data sets; that is, a plurality of damage detection models for specific damage data sets are trained in parallel on the shared backbone network.
For example, the sheet metal part damage detection data set is input into the shared backbone neural network, whose output is passed to the sheet metal part damage detection classification layer, and the parameters of the sheet metal part damage detection model are then updated using the sheet metal part damage loss function.
Additionally, in an embodiment, the shared backbone neural network is a deep residual neural network (Deep Residual Network, ResNet).
For a traditional deep learning network, the deeper the network and the more layers it has, the more it can learn, though of course the slower the convergence and the longer the training time. However, as the depth increases, learning becomes less effective; in some scenarios the accuracy even decreases as more layers are added, and the gradient vanishing and gradient explosion phenomena easily occur. This phenomenon is not caused by overfitting, which is the situation where a model fits the training set too well but does not perform well on new data. Generally, both the training error and the test error of a deep learning network become larger after the number of layers is increased, which means that a deep network becomes difficult to train as its number of layers grows.
The deep residual neural network introduces the design of residual blocks, which solves the problem that learning becomes less effective and the accuracy cannot be effectively improved as the network depth increases. The principle of a residual block is to skip the output of earlier layers directly over several layers and feed it into the input of a later layer. In short, similar to a skip connection, the earlier, better-defined data and the later, lossily compressed data are used together as the input of the following network layers, so that the network can learn richer content. The shared backbone neural network of this embodiment therefore adopts a deep residual neural network.
In one embodiment, referring to fig. 5, step S120 includes, but is not limited to, the following steps:
step S121, performing residual feature vector extraction processing on the damaged vehicle image based on each residual block sequentially connected in the depth residual neural network, to obtain first output data.
In an embodiment, each residual block includes an identity mapping and at least two convolution layers, and the identity mapping of a residual block points from the input end of that block to its output end; that is, the identity mapping is the skip connection mentioned above.
Step S122, inputting the first output data into a damage detection classification layer corresponding to the component type to obtain damaged component information.
In one embodiment, the shared backbone neural network comprises one of: a ResNet-50 network, a ResNet-101 network, a ResNet-110 network, or a ResNet-152 network. Since these network structures are described in detail in the related art, their details are not repeated here.
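The following minimal PyTorch sketch shows one residual block with the identity mapping pointing from the block input to the block output, as described above. The channel count, kernel sizes and normalization layers are illustrative assumptions and do not reproduce the exact ResNet-50/101/110/152 block definitions.

```python
# Illustrative residual block: identity mapping (skip connection) plus two convolution layers.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        identity = x                              # identity mapping from input to output
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + identity)          # earlier data fed directly into the later layer
```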
The following describes the training process of the integrated damage assessment model of this embodiment.
In an embodiment, referring to fig. 6, which is a flowchart of training the integrated damage assessment model in the vehicle damage detection method according to an embodiment of the present application, the training process includes, but is not limited to, steps S610 to S640.
In step S610, a damage data set corresponding to at least one component type is obtained as a training data set, where the training data set includes a corresponding damage judgment tag.
In step S620, the training data set is input into the shared backbone neural network to obtain feature data, which is obtained by the residual feature vector extraction process, that is, similar to the first output data.
Step S630, inputting the characteristic data into a damage detection classification layer corresponding to the component type to obtain a damage component information detection result.
Step S640, training to obtain an integrated damage assessment model according to detection errors between the damage part information detection result and the damage judgment label.
In one embodiment, the damage detection model corresponding to each component type includes a corresponding loss function, and step S640 is specifically: adjusting the parameters in the integrated damage assessment model according to the detection error until the loss function meets the convergence condition, so as to obtain the integrated damage assessment model; that is, the parameters of the damage detection models are updated according to the loss functions. In this embodiment, the convergence condition may be that the loss function is minimized, i.e. the parameters of each damage detection model are optimized such that each loss function is minimized.
In this embodiment, the damage labeling data of all types are used when training the integrated damage assessment model, so the robustness of the integrated damage assessment model is significantly improved and the possibility of overfitting is reduced. Meanwhile, compared with a plurality of separate damage detection models, the single integrated model can obtain all damage detection results through one forward inference, which greatly reduces the occupied computing resources and the consumed computing time.
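A hedged sketch of the training flow of steps S610 to S640 is given below: each batch is drawn from one component type's damage data set, passes through the shared backbone, and only the classification layer and loss of that component type are used to update the parameters. The optimizer, the loss function choice and the data loader interface are assumptions, and the sketch reuses the simplified IntegratedDamageModel from the earlier illustration.

```python
# Sketch under stated assumptions; reuses the IntegratedDamageModel sketched earlier.
import torch

def train_integrated_model(model, loaders_by_part: dict, epochs: int = 10, lr: float = 1e-4):
    criterion = torch.nn.CrossEntropyLoss()               # per-part-type damage loss
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for part, loader in loaders_by_part.items():       # one damage data set per part type
            for images, damage_labels in loader:
                logits = model(images)[part]                # shared backbone + this part's head
                loss = criterion(logits, damage_labels)     # detection error vs. judgment label
                optimizer.zero_grad()
                loss.backward()                             # gradients for backbone and this head
                optimizer.step()
```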
In addition, in an embodiment, to address the imbalance of damage samples across the plurality of damage data sets, a uniform sampling strategy and/or an inter-class balanced sampling strategy is adopted to sample the training data set when training the integrated damage assessment model.
1) Uniform sampling strategy: the damage data set corresponding to each component type is uniformly sampled to obtain the training data set, so that the numbers of samples among the damage data sets corresponding to different component types are balanced, which improves the overall performance of the integrated damage assessment model on each damage data set.
2) Inter-class balanced sampling strategy: when the difference in the number of samples between the damage data sets corresponding to different component types is large, the inter-class balanced sampling strategy is adopted to oversample the damage data set corresponding to the component type with few samples, so as to obtain the training data set. For example, if the number of samples of the damage data set of a certain component category is obviously smaller than that of the other damage data sets, the amount of information that this category can provide is also smaller; balanced sampling with this strategy ensures, as far as possible, that the amount of training data is consistent among the categories.
This strategy alleviates the problem of sample imbalance among the different categories in the damage data sets and improves performance on the corresponding damage data sets. Its main purpose is to ensure, as far as possible, that each category (samples of different component types) occurs with the same probability in each batch of training samples, and to avoid a fixed image input order.
In this embodiment, two lists are used to iteratively sample the samples of each batch. In each iteration, a category X (e.g., glass) is first sampled from the part category list, and then an image is sampled from the image list of category X (e.g., the glass damage data set); when the image list of category X has been fully traversed, the image list is reshuffled and traversal starts again from the beginning. In effect, the damage data sets with few samples are "oversampled", increasing their sample counts and ensuring that the sample counts of the damage data sets of the component categories are balanced within a batch. The part category list is handled in the same way, ensuring that the component categories are also balanced within each batch. The above inter-class balanced sampling strategy thus resolves the class distribution imbalance in the training samples.
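For illustration, the following Python sketch implements the two-list inter-class balanced sampling described above: a part category list and, per category, a shuffled image list that is reshuffled and restarted once it has been fully traversed, which effectively oversamples the smaller damage data sets. The data structures and interface are assumptions of this sketch.

```python
# Illustrative two-list balanced sampler; interfaces are assumed, not taken from the patent.
import random

class BalancedSampler:
    def __init__(self, images_by_category: dict):
        self.categories = list(images_by_category)                          # part category list
        self.images = {c: list(v) for c, v in images_by_category.items()}   # per-category image lists
        self.cursors = {c: 0 for c in self.categories}
        for imgs in self.images.values():
            random.shuffle(imgs)

    def sample_batch(self, batch_size: int) -> list:
        batch = []
        for _ in range(batch_size):
            category = random.choice(self.categories)          # sample a category X
            imgs, i = self.images[category], self.cursors[category]
            if i >= len(imgs):                                  # image list fully traversed:
                random.shuffle(imgs)                            # reshuffle and restart, i.e.
                i = 0                                           # oversample the small data set
            batch.append((category, imgs[i]))
            self.cursors[category] = i + 1
        return batch

sampler = BalancedSampler({"glass": ["g1.jpg", "g2.jpg"], "tire": ["t1.jpg"]})
print(sampler.sample_batch(4))   # the small tire set repeats, so categories stay balanced
```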
Additionally, in an embodiment, the vehicle part segmentation model is pre-trained from a neural network model. In the forward inference stage, an image is input into the integrated damage assessment model, which simultaneously outputs damaged part information for the plurality of damage data sets; the specific position of this damaged part information then needs to be determined in the next step.
Two implementations of the vehicle part segmentation model are described below.
1) The vehicle part segmentation model comprises a part segmentation network and a global feature extraction network. A global feature map of the input damaged vehicle image is acquired, and a plurality of local features corresponding to the global feature map are obtained using the global feature extraction network, so that vehicle part segmentation information can be obtained from the local features using the part segmentation network; the parameters of the model are updated according to a loss function so that the vehicle part segmentation model can output the damage position corresponding to the damaged vehicle image.
2) A vehicle part segmentation model is built following the semantic segmentation approach, and its training process includes: labeling the categories in the damaged vehicle image samples to generate a training set, where each category has a corresponding weight; training the semantic segmentation model to be trained with a corresponding loss function to obtain a loss value; and adjusting the model parameters of the semantic segmentation model according to the loss value, thereby generating the vehicle part segmentation model constructed according to the semantic segmentation approach.
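As a hedged sketch of the second, semantic-segmentation-style implementation, the snippet below uses an off-the-shelf segmentation network whose output channels correspond to vehicle part categories and derives a per-pixel part map from it. The choice of FCN-ResNet50 and the coarse label set are assumptions for illustration only; the embodiment does not prescribe a particular segmentation architecture.

```python
# Sketch under stated assumptions: a generic semantic segmentation network standing in for
# the vehicle part segmentation model; architecture and label set are illustrative only.
import torch
import torchvision

NUM_PART_CLASSES = 1 + 3   # background + e.g. sheet metal / glass / tire regions (assumed)
seg_model = torchvision.models.segmentation.fcn_resnet50(
    weights=None, weights_backbone=None, num_classes=NUM_PART_CLASSES)
seg_model.eval()
with torch.no_grad():
    scores = seg_model(torch.randn(1, 3, 512, 512))["out"]   # (1, C, H, W) class scores
    part_mask = scores.argmax(dim=1)[0]                       # (H, W) per-pixel part category
```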
Having described the specific structures of the integrated damage assessment model and the vehicle part segmentation model, fig. 7 shows a flow chart of the vehicle damage detection method according to an embodiment of the present application, with the following steps:
in step S710, a damaged vehicle image is acquired.
Step S720, inputting the damaged vehicle image into the integrated damage assessment model and the vehicle component segmentation model simultaneously, wherein the integrated damage assessment model comprises a shared backbone neural network and a plurality of damage detection classification layers.
Step S730, obtaining the damaged part information output by the integrated damage assessment model and the damaged position output by the vehicle part segmentation model respectively.
Step S740, the damage information of the damaged vehicle is located from the damaged part information according to the damage position, and only the matching damage information under the target part is retained. By combining the vehicle part segmentation model, the detection results of the integrated damage assessment model are localized to parts, so that the damaged part information and its damage position are obtained and false damage detections in the background area are filtered out.
For example, for an input damaged vehicle image, the damaged part information output by the integrated damage assessment model may be: the damaged part names of the vehicle are a sheet metal part and glass, the damage statuses are scraping and fragmentation, and the damage degree is moderate damage.
The damage position corresponding to the damaged vehicle in the damaged vehicle image is obtained through the vehicle part segmentation model; for example, the damage position is the right side of the front windshield.
The damage position and the damaged part information are then combined to obtain the damage information of the damaged vehicle: the right side of the front windshield is cracked, with a moderate damage degree. That is, the damage detection results for glass within the glass segmentation area are retained, while possible tire or sheet metal damage detections there are removed.
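For illustration, the following sketch shows one way step S740 could keep only the damage detections that fall inside the matching part's segmentation region and discard detections over the background or over non-matching parts. The box format, the part-id mapping and the simple "box center inside the part mask" matching rule are assumptions of this sketch, not the matching rule of the embodiment.

```python
# Illustrative filtering of damage detections with the part segmentation mask.
import torch

def locate_damage(detections: list, part_mask: torch.Tensor, part_id_for: dict) -> list:
    """detections: dicts like {"part": "glass", "damage": "crack", "degree": "moderate",
    "box": (x1, y1, x2, y2)}; part_mask: (H, W) per-pixel part category map (assumed)."""
    located = []
    for det in detections:
        x1, y1, x2, y2 = det["box"]
        cx, cy = int((x1 + x2) / 2), int((y1 + y2) / 2)        # center of the damage box
        region = int(part_mask[cy, cx])                         # part category at that pixel
        if region == part_id_for.get(det["part"], -1):          # matches the detected part type
            located.append({**det, "part_region_id": region})   # keep and attach localization
        # otherwise: background or a mismatched part, treated as a false detection and dropped
    return located
```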
The embodiment of the application provides a vehicle damage detection method: a damaged vehicle image is acquired and input into a trained integrated damage assessment model to obtain damaged part information, a damage position corresponding to the damaged vehicle image is obtained according to a vehicle part segmentation model, and damage information of the vehicle is obtained according to the damage position and the damaged part information. The integrated damage assessment model comprises a shared backbone neural network integrated from the damage detection models corresponding to at least one component type; this integrated form facilitates model optimization and training, and the labeled data of all damage types can be used during training, which effectively improves the robustness of the model and reduces the possibility of overfitting. Compared with the plural separate damage detection models of the related art, the integrated damage assessment model can obtain all damaged part information through one forward inference, which greatly reduces the occupied computing resources and the consumed computing time, effectively improves the detection efficiency, and reduces the labor cost of vehicle damage assessment. By combining the damage position corresponding to the damaged vehicle image obtained through the vehicle part segmentation model, damage information of the vehicle that contains positioning information can be obtained; moreover, because the damage position of the component is included, false damage detections in the background area can be filtered out, further improving the detection precision of vehicle damage detection.
In addition, an embodiment of the present application further provides a vehicle damage detection device; referring to fig. 8, the device includes:
an image acquisition module 810, configured to acquire a damaged vehicle image;
a damaged part information determining module 820, configured to input the damaged vehicle image into a trained integrated damage assessment model to obtain damaged part information, where the integrated damage assessment model comprises:
a shared backbone neural network, which is integrated from damage detection models corresponding to at least one component type and is used for preprocessing the damaged vehicle image to obtain first output data; and
a damage detection classification layer, which is used for processing the first output data to obtain damaged part information;
an image segmentation module 830, configured to input the damaged vehicle image into a vehicle part segmentation model to obtain a damage position corresponding to the damaged vehicle image; and
a damage information synthesis module 840, configured to locate the damaged part information according to the damage position to obtain the damage information of the vehicle.
The above described apparatus embodiments are merely illustrative, wherein the units illustrated as separate components may or may not be physically separate, i.e. may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The vehicle damage detection device in the present embodiment may execute the vehicle damage detection method in the embodiment shown in fig. 2. That is, the vehicle damage detection device in the present embodiment and the vehicle damage detection method in the embodiment shown in fig. 2 are both of the same inventive concept, and therefore these embodiments have the same implementation principle and technical effect, and are not described in detail here.
In addition, an embodiment of the present application further provides a computer device, where the computer device includes: memory, a processor, and a computer program stored on the memory and executable on the processor.
The processor and the memory may be connected by a bus or other means.
The memory, as a non-transitory computer readable storage medium, may be used to store non-transitory software programs as well as non-transitory computer executable programs. In addition, the memory may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory optionally includes memory remotely located relative to the processor, the remote memory being connectable to the processor through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The non-transitory software program and instructions required to implement the vehicle damage detection method of the above-described embodiments are stored in the memory, and when executed by the processor, the vehicle damage detection method in the above-described embodiments is performed, for example, the method steps S110 to S140 in fig. 2, the method steps S510 to S530 in fig. 5, the method steps S610 to S640 in fig. 6, and the like described above are performed.
Furthermore, an embodiment of the present application further provides a computer-readable storage medium storing computer-executable instructions which, when executed by a processor or controller, for example by one of the processors in the above-described computer device embodiment, may cause the processor to perform the vehicle damage detection method in the above-described embodiments, for example, to perform the method steps S110 to S140 in fig. 2, the method steps S510 to S530 in fig. 5, the method steps S610 to S640 in fig. 6, and the like described above.
Those of ordinary skill in the art will appreciate that all or some of the steps and systems in the methods disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, a digital signal processor, or a microprocessor, or as hardware, or as an integrated circuit, such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those skilled in the art, the term computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Discs (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. Furthermore, as is well known to those of ordinary skill in the art, communication media typically embody computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and include any information delivery media.
While the preferred embodiments of the present application have been described in detail, the present application is not limited to the above-described embodiments, and those skilled in the art can make various equivalent modifications or substitutions without departing from the spirit of the embodiments of the present application; such equivalent modifications or substitutions are included within the scope of the embodiments of the present application as defined by the appended claims.

Claims (6)

1. A vehicle damage detection method, characterized by comprising:
acquiring a damaged vehicle image;
inputting the damaged vehicle image into a trained integrated damage assessment model to obtain damaged part information, wherein the integrated damage assessment model comprises:
the shared backbone neural network, integrated from the damage detection models corresponding to at least one component type and used for preprocessing the damaged vehicle image to obtain first output data;
the damage detection classification layer is used for processing the first output data to obtain damaged part information;
inputting the damaged vehicle image into a vehicle part segmentation model to obtain a damaged position corresponding to the damaged vehicle image;
obtaining, by positioning according to the damage position, the damage information of the vehicle from the damaged part information;
wherein the shared backbone neural network is integrated from a sheet metal part damage detection model, a glass damage detection model and a tire damage detection model;
the damage detection classification layer comprises: a sheet metal part damage detection classification layer, a glass damage detection classification layer or a tire damage detection classification layer;
the integrated damage assessment model is obtained through the following training process:
acquiring a damage data set corresponding to at least one component type as a training data set, wherein the training data set comprises a corresponding damage judgment label;
inputting the training data set into the shared backbone neural network to obtain characteristic data;
inputting the characteristic data into the damage detection classification layer corresponding to the component type to obtain a damaged part information detection result;
training to obtain the integrated damage assessment model according to the detection error between the damaged part information detection result and the damage judgment label;
wherein each damage detection model corresponding to a component type comprises a corresponding loss function, and the training to obtain the integrated damage assessment model according to the detection error between the damaged part information detection result and the damage judgment label further comprises:
adjusting parameters in the integrated damage assessment model according to the detection error until the loss function meets a convergence condition, so as to obtain the integrated damage assessment model;
the shared backbone neural network is a deep residual neural network, and the deep residual neural network comprises: a Res-Net50 network, a Res-Net101 network, a Res-Net110 network, or a Res-Net152 network;
the inputting the damaged vehicle image into a trained integrated damage assessment model to obtain damaged part information comprises:
performing residual feature vector extraction processing on the damaged vehicle image based on the sequentially connected residual blocks in the deep residual neural network to obtain the first output data; wherein any residual block comprises an identity mapping and at least two convolution layers, and the identity mapping of any residual block points from the input end of the residual block to the output end of the residual block;
inputting the first output data into the damage detection classification layer corresponding to the component type to obtain the damaged part information;
the component types include: sheet metal parts, glass and tires;
the damaged part information includes: the name of the damaged part, the damage status and the damage degree;
the damaged part name includes: one or more of a sheet metal part, glass or tire;
the damage status includes: one or more of scoring, scraping, sagging, creasing, tearing, missing, or cracking;
the damage degree comprises: one or more of minor damage, moderate damage, or severe damage.
2. The method for detecting damage to a vehicle according to claim 1, wherein the acquiring the damage data set corresponding to the at least one component type as the training data set includes:
uniformly sampling the damage data sets corresponding to the respective component types by adopting a uniform sampling strategy to obtain the training data set, so as to balance the sample quantities among the damage data sets corresponding to different component types.
3. The method for detecting damage to a vehicle according to claim 1, wherein the acquiring the damage data set corresponding to the at least one component type as the training data set includes:
when the difference in sample quantity between the damage data sets corresponding to different component types is large, oversampling the damage data set corresponding to the component type with the smaller sample quantity by adopting an inter-class balanced sampling strategy, so as to obtain the training data set.
4. A vehicle damage detection device, characterized by comprising:
the image acquisition module is used for acquiring damaged vehicle images;
the damaged part information determining module is used for inputting the damaged vehicle image into a trained integrated damage assessment model to obtain damaged part information, and the integrated damage assessment model comprises:
the shared backbone neural network, integrated from the damage detection models corresponding to at least one component type and used for preprocessing the damaged vehicle image to obtain first output data;
the damage detection classification layer is used for processing the first output data to obtain damaged part information;
the image segmentation module is used for inputting the damaged vehicle image into a vehicle part segmentation model to obtain a damage position corresponding to the damaged vehicle image;
the damage information synthesis module is used for obtaining, by positioning according to the damage position, the damage information of the vehicle from the damaged part information;
wherein the shared backbone neural network is integrated from a sheet metal part damage detection model, a glass damage detection model and a tire damage detection model;
the damage detection classification layer comprises: a sheet metal part damage detection classification layer, a glass damage detection classification layer or a tire damage detection classification layer;
the integrated damage assessment model is obtained through the following training process:
acquiring a damage data set corresponding to at least one component type as a training data set, wherein the training data set comprises a corresponding damage judgment label;
inputting the training data set into the shared backbone neural network to obtain characteristic data;
inputting the characteristic data into the damage detection classification layer corresponding to the component type to obtain a damaged part information detection result;
training to obtain the integrated damage assessment model according to the detection error between the damaged part information detection result and the damage judgment label;
wherein each damage detection model corresponding to a component type comprises a corresponding loss function, and the training to obtain the integrated damage assessment model according to the detection error between the damaged part information detection result and the damage judgment label further comprises:
adjusting parameters in the integrated damage assessment model according to the detection error until the loss function meets a convergence condition, so as to obtain the integrated damage assessment model;
the shared backbone neural network is a deep residual neural network, and the deep residual neural network comprises: a Res-Net50 network, a Res-Net101 network, a Res-Net110 network, or a Res-Net152 network;
the inputting the damaged vehicle image into a trained integrated damage assessment model to obtain damaged part information comprises:
performing residual feature vector extraction processing on the damaged vehicle image based on the sequentially connected residual blocks in the deep residual neural network to obtain the first output data; wherein any residual block comprises an identity mapping and at least two convolution layers, and the identity mapping of any residual block points from the input end of the residual block to the output end of the residual block;
inputting the first output data into the damage detection classification layer corresponding to the component type to obtain the damaged part information;
the component types include: sheet metal parts, glass and tires;
the damaged part information includes: the name of the damaged part, the damage status and the damage degree;
the damaged part name includes: one or more of a sheet metal part, glass or tire;
the damage status includes: one or more of scoring, scraping, sagging, creasing, tearing, missing, or cracking;
the damage degree comprises: one or more of minor damage, moderate damage, or severe damage.
5. A computer device comprising a processor and a memory;
the memory is used for storing programs;
the processor is configured to execute, according to the program, the vehicle damage detection method according to any one of claims 1 to 3.
6. A computer-readable storage medium storing computer-executable instructions for performing the vehicle damage detection method according to any one of claims 1 to 3.
CN202111080117.1A 2021-09-15 2021-09-15 Vehicle damage detection method, device, equipment and storage medium Active CN113780435B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111080117.1A CN113780435B (en) 2021-09-15 2021-09-15 Vehicle damage detection method, device, equipment and storage medium
PCT/CN2022/071075 WO2023040142A1 (en) 2021-09-15 2022-01-10 Vehicle damage detection method and apparatus, and device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111080117.1A CN113780435B (en) 2021-09-15 2021-09-15 Vehicle damage detection method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113780435A CN113780435A (en) 2021-12-10
CN113780435B true CN113780435B (en) 2024-04-16

Family

ID=78844155

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111080117.1A Active CN113780435B (en) 2021-09-15 2021-09-15 Vehicle damage detection method, device, equipment and storage medium

Country Status (2)

Country Link
CN (1) CN113780435B (en)
WO (1) WO2023040142A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113780435B (en) * 2021-09-15 2024-04-16 平安科技(深圳)有限公司 Vehicle damage detection method, device, equipment and storage medium
CN114723945A (en) * 2022-04-07 2022-07-08 平安科技(深圳)有限公司 Vehicle damage detection method and device, electronic equipment and storage medium
CN116883444B (en) * 2023-08-02 2024-01-12 武汉理工大学 Automobile damage detection method based on machine vision and image scanning
CN116910495B (en) * 2023-09-13 2024-01-26 江西五十铃汽车有限公司 Method and system for detecting off-line of automobile, readable storage medium and automobile

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109359676A (en) * 2018-10-08 2019-02-19 百度在线网络技术(北京)有限公司 Method and apparatus for generating vehicle damage information
CN109657716A (en) * 2018-12-12 2019-04-19 天津卡达克数据有限公司 A kind of vehicle appearance damnification recognition method based on deep learning
CN109657596A (en) * 2018-12-12 2019-04-19 天津卡达克数据有限公司 A kind of vehicle appearance component identification method based on deep learning
CN110443814A (en) * 2019-07-30 2019-11-12 北京百度网讯科技有限公司 Damage identification method, device, equipment and the storage medium of vehicle
CN110728236A (en) * 2019-10-12 2020-01-24 创新奇智(重庆)科技有限公司 Vehicle loss assessment method and special equipment thereof

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11676365B2 (en) * 2019-12-16 2023-06-13 Accenture Global Solutions Limited Explainable artificial intelligence (AI) based image analytic, automatic damage detection and estimation system
US11999364B2 (en) * 2020-12-23 2024-06-04 Intel Corporation Systems and methods for intrusion detection in vehicle systems
CN113780435B (en) * 2021-09-15 2024-04-16 平安科技(深圳)有限公司 Vehicle damage detection method, device, equipment and storage medium

Also Published As

Publication number Publication date
WO2023040142A1 (en) 2023-03-23
CN113780435A (en) 2021-12-10

Similar Documents

Publication Publication Date Title
CN113780435B (en) Vehicle damage detection method, device, equipment and storage medium
CN109902732B (en) Automatic vehicle classification method and related device
CN109086668B (en) Unmanned aerial vehicle remote sensing image road information extraction method based on multi-scale generation countermeasure network
CN108319907A (en) A kind of vehicle identification method, device and storage medium
CN113221911B (en) Vehicle weight identification method and system based on dual attention mechanism
CN108171175B (en) Deep learning sample enhancement system and operation method thereof
EP3843036A1 (en) Sample labeling method and device, and damage category identification method and device
CN110991506A (en) Vehicle brand identification method, device, equipment and storage medium
CN116580271A (en) Evaluation method, device, equipment and storage medium for perception fusion algorithm
CN112633149A (en) Domain-adaptive foggy-day image target detection method and device
CN111435446A (en) License plate identification method and device based on L eNet
CN113052159A (en) Image identification method, device, equipment and computer storage medium
CN112613375A (en) Tire damage detection and identification method and device
CN111797000A (en) Scene complexity evaluation method based on gradient lifting decision tree model
CN110659601A (en) Depth full convolution network remote sensing image dense vehicle detection method based on central point
CN116964588A (en) Target detection method, target detection model training method and device
CN117152513A (en) Vehicle boundary positioning method for night scene
CN117726884B (en) Training method of object class identification model, object class identification method and device
CN115272222A (en) Method, device and equipment for processing road detection information and storage medium
CN111444911A (en) Training method and device of license plate recognition model and license plate recognition method and device
CN116861262B (en) Perception model training method and device, electronic equipment and storage medium
CN113902793A (en) End-to-end building height prediction method and system based on single vision remote sensing image and electronic equipment
CN113313110A (en) License plate type recognition model construction and license plate type recognition method
CN115984723A (en) Road damage detection method, system, device, storage medium and computer equipment
CN110765900A (en) DSSD-based automatic illegal building detection method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant