CN113780435A - Vehicle damage detection method, device, equipment and storage medium

Vehicle damage detection method, device, equipment and storage medium

Info

Publication number
CN113780435A
Authority
CN
China
Prior art keywords: damage, damaged, vehicle, information, damage detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111080117.1A
Other languages
Chinese (zh)
Other versions
CN113780435B (en)
Inventor
方起明
刘莉红
刘玉宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202111080117.1A priority Critical patent/CN113780435B/en
Publication of CN113780435A publication Critical patent/CN113780435A/en
Priority to PCT/CN2022/071075 priority patent/WO2023040142A1/en
Application granted granted Critical
Publication of CN113780435B publication Critical patent/CN113780435B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F 18/24133 Distances to prototypes
    • G06F 18/24137 Distances to cluster centroïds
    • G06F 18/2414 Smoothing the distance, e.g. radial basis function networks [RBFN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 40/00 Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q 40/08 Insurance

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Business, Economics & Management (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Development Economics (AREA)
  • General Business, Economics & Management (AREA)
  • Technology Law (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Strategic Management (AREA)
  • Marketing (AREA)
  • Economics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

An embodiment of the invention provides a vehicle damage detection method, device, equipment and storage medium, relating to the technical field of artificial-intelligence image processing. The method comprises: obtaining a damaged vehicle image; inputting the damaged vehicle image into a trained integrated damage assessment model to obtain damaged part information; obtaining the damage position corresponding to the damaged vehicle image with a vehicle part segmentation model; and obtaining the damage information of the vehicle according to the damage position and the damaged part information. Compared with the multiple separate damage detection models of the related art, the integrated damage assessment model obtains the corresponding damaged part information in a single forward pass, which greatly reduces the computing resources occupied and the computing time consumed, effectively improves detection efficiency, and reduces the labor cost of the vehicle damage assessment process. In addition, because the damage position corresponding to the damaged vehicle image is obtained by combining the vehicle part segmentation model, damage information of the vehicle containing positioning information can be obtained.

Description

Vehicle damage detection method, device, equipment and storage medium
Technical Field
The embodiment of the application relates to the field of artificial intelligence, in particular to a vehicle damage detection method, device, equipment and storage medium.
Background
Determining the location and extent of damage to a damaged vehicle is a very important task after a traffic accident. The traditional damage assessment method depends on manual judgment; it is inefficient and prone to errors introduced by the personal factors of the damage assessment personnel. Artificial Intelligence (AI) refers to the theory, methods, techniques and application systems that use a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use that knowledge to obtain optimal results. In recent years, with the development of AI technology, some organizations have adopted computer-vision-based methods (object detection and the like) to perform intelligent damage assessment on damaged vehicles, and these methods reduce the dependence of damage assessment work on human labor to a certain extent.
However, because damage scenes are complex and variable, vehicles have many parts, and damage takes many forms, many damage detection models often need to be developed; for example, separate damage detection models are developed for sheet metal parts, glass, tires and so on according to the material of the part. First, in the model training and development stage, multiple damage data sets need to be prepared and annotated, and multiple damage detection models need to be trained and optimized separately; this process is cumbersome and inefficient, requires a larger investment of manpower and material resources, and each trained model can only be used for its corresponding part type, so the data reuse rate is low and robustness is poor. Second, in the model deployment stage, multiple models consume more computing resources and increase scheduling difficulty, cost and so on.
Disclosure of Invention
The following is a summary of the subject matter described in detail herein. This summary is not intended to limit the scope of the claims.
The embodiment of the application provides a vehicle damage detection method, device, equipment and storage medium, which can effectively improve detection efficiency and detection accuracy and reduce the labor cost of the vehicle damage assessment process.
In a first aspect, an embodiment of the present application provides a vehicle damage detection method, including:
acquiring a damaged vehicle image;
inputting the damaged vehicle image into a trained integrated damage assessment model to obtain damaged component information, wherein the integrated damage assessment model comprises:
the shared trunk neural network is obtained by integrating damage detection models corresponding to at least one component type, and is used for preprocessing the damaged vehicle image to obtain first output data;
at least one damage detection classification layer corresponding to the component type, which is used for processing the first output data to obtain damaged component information;
inputting the damaged vehicle image into a vehicle component segmentation model to obtain a damage position corresponding to the damaged vehicle image;
and positioning the damaged part information according to the damaged position to obtain the damaged information of the vehicle.
In an optional implementation manner, the shared trunk neural network is a deep residual neural network, and the deep residual neural network includes: a Res-Net50 network, a Res-Net101 network, a Res-Net110 network, or a Res-Net152 network;
the step of inputting the damaged vehicle image into a trained integrated damage assessment model to obtain damaged part information includes:
based on the sequentially connected residual blocks in the deep residual neural network, performing residual feature vector extraction processing on the damaged vehicle image to obtain first output data, wherein each residual block comprises an identity mapping and at least two convolution layers, and the identity mapping of a residual block runs from the input end of that residual block to its output end;
and inputting the first output data to a damage detection classification layer corresponding to the component type to obtain damaged component information.
In an alternative implementation, the integrated damage assessment model is trained through the following training process:
acquiring a damage data set corresponding to at least one component type as a training data set, wherein the training data set comprises corresponding damage judgment labels;
inputting the training data set into the shared trunk neural network to obtain characteristic data;
inputting the characteristic data into a damage detection classification layer corresponding to the component type to obtain a damaged component information detection result;
and training to obtain the integrated damage assessment model according to the detection error between the damaged part information detection result and the damage judgment label.
In an optional implementation manner, the training to obtain the integrated damage assessment model according to the detection error between the damaged component information detection result and the damage judgment label further includes:
and adjusting parameters in the integrated damage assessment model according to the detection error until the loss function meets a convergence condition to obtain the integrated damage assessment model.
In an optional implementation manner, the acquiring a damage data set corresponding to at least one component type as a training data set includes:
and uniformly sampling the damage data sets corresponding to each component type by adopting a uniform sampling strategy to obtain a training data set so as to balance the number of samples among the loss data sets corresponding to different component types.
In an optional implementation manner, the acquiring a damage data set corresponding to at least one component type as a training data set includes:
and when the number of samples between the damage data sets corresponding to different component types is large in difference, oversampling is carried out on the loss data set corresponding to the component type with the small number of samples by adopting an inter-class balance sampling strategy to obtain a training data set.
In an alternative implementation, the component types include: sheet metal parts, glass and tires;
the shared trunk neural network is obtained by integrating a sheet metal part damage detection model, a glass damage detection model and a tire damage detection model;
the damage detection classification layer includes: a sheet metal damage detection classification layer, a glass damage detection classification layer or a tire damage detection classification layer;
the damaged part information includes: damaged part name, damaged state, and degree of damage;
the damaged part name includes: one or more of sheet metal parts, glass or tires;
the damage information includes: one or more of scratches, dents, wrinkles, tears, deletions, or ruptures;
the degree of damage includes: one or more of mild injury, moderate injury, or severe injury.
In a second aspect, an embodiment of the present application provides a vehicle damage detection apparatus, including:
the image acquisition module is used for acquiring an image of the damaged vehicle;
a damaged component information determination module, configured to input the damaged vehicle image into a trained integrated damage assessment model to obtain damaged component information, where the integrated damage assessment model includes:
the shared trunk neural network is obtained by integrating damage detection models corresponding to at least one component type, and is used for preprocessing the damaged vehicle image to obtain first output data;
at least one damage detection classification layer corresponding to the component type, which is used for processing the first output data to obtain damaged component information;
the image segmentation module is used for inputting the damaged vehicle image into a vehicle component segmentation model to obtain a damage position corresponding to the damaged vehicle image;
and the damage information synthesis module is used for positioning and obtaining the damage information of the vehicle from the damaged part information according to the damaged position.
In a third aspect, an embodiment of the present application provides a computer device, which includes a processor and a memory;
the memory is used for storing programs;
the processor is configured to execute the vehicle damage detection method according to any one of the first aspect according to the program.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing computer-executable instructions for executing the vehicle damage detection method according to any one of the first aspect.
Compared with the related art, the vehicle damage detection method provided by the embodiment of the application obtains a damaged vehicle image, inputs the damaged vehicle image into a trained integrated damage assessment model to obtain damaged part information, obtains the damage position corresponding to the damaged vehicle image with a vehicle part segmentation model, and obtains the damage information of the vehicle according to the damage position and the damaged part information. The integrated damage assessment model includes a shared trunk neural network obtained by integrating the damage detection models corresponding to at least one component type; this integrated form facilitates model optimization and training, and the annotated data of all damage types can be used during training, which effectively improves the robustness of the model and reduces the possibility of overfitting. Compared with the multiple separate damage detection models of the related art, the integrated damage assessment model obtains the corresponding damaged part information in a single forward pass, which greatly reduces the computing resources occupied and the computing time consumed, effectively improves detection efficiency, and reduces the labor cost of the vehicle damage assessment process. In addition, damage information of the vehicle containing positioning information can be obtained after combining the damage position corresponding to the damaged vehicle image obtained by the vehicle part segmentation model; furthermore, because the damage information of the vehicle includes the damage position of the part, the embodiment of the application can filter out false damage detections in the background area, further improving the detection accuracy of vehicle damage detection.
It is to be understood that the advantageous effects of the second aspect to the fourth aspect compared to the related art are the same as the advantageous effects of the first aspect compared to the related art, and reference may be made to the related description of the first aspect, which is not repeated herein.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed for the embodiments or for the description of the related art are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present application, and that those skilled in the art can obtain other drawings based on these drawings without inventive effort.
FIG. 1 is a schematic diagram of an exemplary system architecture provided by one embodiment of the present application;
FIG. 2 is a flow chart of a vehicle damage detection method provided by an embodiment of the present application;
FIG. 3 is a block diagram showing a structure of a vehicle detection model in the related art;
FIG. 4 is a block diagram of an integrated damage assessment model according to an embodiment of the present application;
FIG. 5 is a flow chart of a vehicle damage detection method provided by an embodiment of the present application;
FIG. 6 is a flowchart of training an integrated damage assessment model in a vehicle damage detection method according to an embodiment of the present application;
FIG. 7 is a block flow diagram of a vehicle damage detection method provided by an embodiment of the present application;
fig. 8 is a block diagram of a vehicle damage detection device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the embodiments of the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the embodiments of the present application with unnecessary detail.
It should be noted that, although a logical order is illustrated in the flowcharts, in some cases, the steps illustrated or described may be performed in an order different from that in the flowcharts. The terms first, second and the like in the description and in the claims, and the drawings described above, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
It should also be appreciated that reference throughout the specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
The automobile is one of the most frequently used means of land transportation, and it inevitably suffers damage under the influence of factors such as weather, road conditions and driver skill, particularly in traffic accidents. Determining the location and extent of damage to a damaged vehicle is therefore a very important task, since it affects not only the choice of the subsequent vehicle repair plan but also the confirmation of the amount of economic compensation for the parties involved in the accident. The traditional damage assessment method depends on manual judgment; it is inefficient and prone to errors introduced by the personal factors of the damage assessment personnel. In recent years, with the development of artificial intelligence technology, some organizations have adopted computer-vision-based methods (object detection and the like) to intelligently assess the damage of damaged vehicles, and these methods reduce the dependence of damage assessment work on human labor to some extent.
Generally, in the design and development of such an intelligent vehicle damage assessment system, because damage assessment scenes are complex and variable, vehicles have many components, and damage takes many forms, it is often necessary to develop many damage detection models, for example, separate damage detection models for sheet metal parts, glass, tires and so on according to the material of the component. First, in the model training and development stage, multiple damage data sets need to be annotated and prepared, and multiple damage detection models need to be trained and optimized separately; this process of training different damage detection models separately is cumbersome and inefficient, requires a larger investment of manpower and material resources, and the trained models have poor robustness because of the low data reuse rate. Second, in the model deployment stage, multiple models consume more computing resources and increase scheduling difficulty, cost and so on.
The embodiment of the application is intended to remedy these defects and provides a vehicle damage detection method: a damaged vehicle image is obtained, the damaged vehicle image is input into a trained integrated damage assessment model to obtain damaged part information, the damage position corresponding to the damaged vehicle image is obtained with a vehicle part segmentation model, and the damage information of the vehicle is obtained according to the damage position and the damaged part information. The integrated damage assessment model includes a shared trunk neural network obtained by integrating the damage detection models corresponding to at least one component type; this integrated form facilitates model optimization and training, and the annotated data of all damage types can be used during training, which effectively improves the robustness of the model and reduces the possibility of overfitting. Compared with the multiple separate damage detection models of the related art, the integrated damage assessment model obtains all the damaged part information in a single forward inference, which greatly reduces the computing resources occupied and the computing time consumed, effectively improves detection efficiency, and reduces the labor cost of the vehicle damage assessment process. After combining the damage position corresponding to the damaged vehicle image obtained by the vehicle part segmentation model, damage information of the vehicle containing positioning information can be obtained; moreover, because the damage position of the part is included, false damage detections in the background area can be filtered out, further improving the detection accuracy of vehicle damage detection.
The embodiments of the present application will be further explained with reference to the drawings.
Fig. 1 shows a schematic diagram of an exemplary system architecture to which the technical solution of the embodiments of the present invention can be applied.
As shown in fig. 1, the system architecture 100 may include a terminal device (e.g., one or more of the desktop computer 101, tablet computer 102, and portable computer 103 shown in fig. 1, but may be other terminal devices having a display screen, etc.), a network 104, and a server 105. The network 104 serves as a medium for providing communication links between terminal devices and the server 105. Network 104 may include various connection types, such as wired communication links, wireless communication links, and so forth.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
For example, the server 105 may be an independent server, or may be a cloud server that provides basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a web service, cloud communication, a middleware service, a domain name service, a security service, a Content Delivery Network (CDN), a big data and artificial intelligence platform, or a server cluster composed of a plurality of servers.
In an embodiment of the present invention, a user may upload damaged vehicle images, which may include damaged vehicle images captured at different angles and different distances for different parts, or a single photo, to the server 105 by using the terminal device 101 (which may also be the terminal device 102 or 103). After obtaining the damaged vehicle images, the server 105 inputs the damaged vehicle images into the trained integrated damage assessment model to obtain damaged component information, obtains damaged positions corresponding to the damaged vehicle images according to the vehicle component segmentation model, and then obtains the damaged information of the vehicle according to the damaged positions and the damaged component information. The detection efficiency can be effectively improved, and the labor cost in the vehicle damage assessment process is reduced. And moreover, the damage information of the vehicle containing the positioning information can be obtained, and meanwhile, the damage position of the component is contained, so that the damage false detection of the background area can be filtered, and the detection precision of the vehicle damage detection is further improved.
It should be noted that the vehicle damage detection method provided by the embodiment of the present invention is generally executed by the server 105, and accordingly, the vehicle damage detection apparatus is generally disposed in the server 105. However, in other embodiments of the present invention, the terminal device may have functions similar to those of the server, so as to execute the vehicle damage detection scheme provided by the embodiments of the present invention.
The system architecture and the application scenario described in the embodiment of the present application are for more clearly illustrating the technical solution of the embodiment of the present application, and do not form a limitation on the technical solution provided in the embodiment of the present application, and it is known by those skilled in the art that the technical solution provided in the embodiment of the present application is also applicable to similar technical problems with the evolution of the system architecture and the appearance of new application scenarios. Those skilled in the art will appreciate that the system architecture shown in FIG. 1 is not intended to be limiting of embodiments of the present application and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The embodiment of the application can acquire and process the relevant data based on artificial intelligence technology. Artificial Intelligence (AI) is the theory, method, technique and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use the knowledge to obtain the best results.
Based on the system architecture, various embodiments of the vehicle damage detection method of the embodiment of the application are provided.
As shown in fig. 2, fig. 2 is a flowchart of a vehicle damage detection method according to an embodiment of the present application, which includes, but is not limited to, steps S110 to S140.
Step S110, a damaged vehicle image is acquired.
And step S120, inputting the damaged vehicle image into the trained integrated damage assessment model to obtain damaged part information.
Step S130, inputting the damaged vehicle image into the vehicle component segmentation model to obtain a damaged position corresponding to the damaged vehicle image.
And step S140, positioning and obtaining the damage information of the vehicle from the damaged part information according to the damage position.
It can be understood that step S120 and step S130 have no fixed order: they may be executed simultaneously, step S120 may be executed before step S130, or step S130 may be executed before step S120; the execution order is not limited in this embodiment.
In one embodiment, the damaged part information in step S120 includes a damaged part name, a damage state, and a degree of damage. Damaged part names include, but are not limited to: sheet metal parts, glass, tires, and the like; damage states include, but are not limited to: scratches, dents, wrinkles, tears, missing parts, ruptures, and the like; degrees of damage include, but are not limited to: mild damage, moderate damage, severe damage, and the like.
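As a simple illustration (not part of the claimed method), this damaged part information could be represented by a small data structure; the Python names and enumeration values below are assumptions chosen to mirror the categories listed in this embodiment.

```python
from dataclasses import dataclass
from enum import Enum


class PartName(Enum):
    SHEET_METAL = "sheet metal part"
    GLASS = "glass"
    TIRE = "tire"


class DamageState(Enum):
    SCRATCH = "scratch"
    DENT = "dent"
    WRINKLE = "wrinkle"
    TEAR = "tear"
    MISSING = "missing part"
    RUPTURE = "rupture"


class DamageDegree(Enum):
    MILD = "mild damage"
    MODERATE = "moderate damage"
    SEVERE = "severe damage"


@dataclass
class DamagedPartInfo:
    part_name: PartName    # name of the damaged part
    state: DamageState     # form the damage takes
    degree: DamageDegree   # severity of the damage


# Example mirroring the scenario below: a moderately scratched sheet metal part.
example = DamagedPartInfo(PartName.SHEET_METAL, DamageState.SCRATCH, DamageDegree.MODERATE)
```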
In one embodiment, after a traffic accident occurs, a user can take a scene image of the accident with a mobile phone, for example, take images of damaged vehicles at different positions and different angles to obtain images of damaged vehicles, and then send the images of damaged vehicles to a background server for damage assessment.
The background server stores a pre-trained integrated damage assessment model and obtains the damaged part information in the damaged vehicle images uploaded by the user, where the damaged part information comprises the name, damage state and damage degree of each damaged part; for example, the name of the damaged vehicle part is a sheet metal part, the damage state is a scratch, and the damage degree is moderate damage.
The damage position corresponding to the damaged vehicle in the damaged vehicle image is obtained through the vehicle component segmentation model, for example: the left front door sheet metal part.
The damage information of the damaged vehicle is then obtained from the damage position and the damaged part information; the damage information comprises the damaged part information and the damage position. In the example above, the damage information of the damaged vehicle is, for example: the left front door sheet metal part has a scratch of moderate damage degree.
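The end-to-end flow of steps S110 to S140 can be sketched as follows. `damage_model` and `part_segmentation_model` are hypothetical callables standing in for the trained integrated damage assessment model and the vehicle part segmentation model, and the dictionary formats are illustrative assumptions rather than the patent's concrete interfaces.

```python
def detect_vehicle_damage(image, damage_model, part_segmentation_model):
    """Sketch of steps S110-S140 (data formats are assumptions)."""
    # Step S120: integrated damage assessment model -> damaged part information,
    # e.g. [{"part": "sheet metal part", "state": "scratch", "degree": "moderate damage"}]
    damaged_parts = damage_model(image)

    # Step S130: vehicle part segmentation model -> damage positions,
    # e.g. ["left front door sheet metal part"]  (S120 and S130 can run in either order)
    damage_positions = part_segmentation_model(image)

    # Step S140: locate the damaged part information according to the damage position.
    return [
        {"position": position, **part_info}
        for position in damage_positions
        for part_info in damaged_parts
        if part_info["part"] in position   # keep only positions of matching part types
    ]
```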
Additionally, in one embodiment, the integrated damage assessment model comprises:
the shared trunk neural network is obtained by integrating damage detection models corresponding to at least one component type, and the shared trunk neural network is used for preprocessing the damaged vehicle image to obtain first output data; and the damage detection classification layer corresponding to at least one component type is used for processing the first output data to obtain damaged component information.
In this embodiment, the integrated damage assessment model is composed of an integrated shared backbone neural network and a plurality of damage detection classification layers (corresponding to different component types), each damage detection classification layer correspondingly processes a damage data set of one component type, the shared backbone neural network is obtained by integrating the damage detection model corresponding to at least one component type, and then the shared backbone neural network is connected to the damage detection classification layer corresponding to at least one component type to output damage component information corresponding to the component type.
In other words, the damage detection branches for the different components differ only in the final damage detection classification layer; the backbone neural network in front of them is shared. After each damage data set passes through the shared backbone neural network, its damage classes are predicted by the damage detection classification layer at the corresponding end. In this way, multiple damage detection models are integrated into one model by sharing a backbone neural network. Unlike the existing approach of using a separate damage detection model for each component type, the backbone neural network does not distinguish component types and performs the different damage detections with a single network structure, while multiple damage detection classification layers correspond to the multiple damage data sets, so that the damage results of all components are output.
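A schematic PyTorch-style sketch of this shared-backbone structure follows, assuming a torchvision ResNet-50 as the shared trunk and a simple linear classification head standing in for each damage detection classification layer; the class counts, feature size and head names are illustrative assumptions rather than the patent's concrete implementation.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50


class IntegratedDamageModel(nn.Module):
    """Shared backbone plus one damage detection classification layer per part type (sketch)."""

    def __init__(self, num_classes_per_type: dict[str, int]):
        super().__init__()
        backbone = resnet50(weights=None)
        # Shared trunk: everything up to (but not including) the final fc layer.
        self.backbone = nn.Sequential(*list(backbone.children())[:-1])
        # One classification head per component type (sheet metal, glass, tire, ...).
        self.heads = nn.ModuleDict({
            part_type: nn.Linear(2048, n_cls)
            for part_type, n_cls in num_classes_per_type.items()
        })

    def forward(self, x: torch.Tensor) -> dict[str, torch.Tensor]:
        features = self.backbone(x).flatten(1)   # "first output data" from the shared trunk
        return {part_type: head(features) for part_type, head in self.heads.items()}


model = IntegratedDamageModel({"sheet_metal": 6, "glass": 4, "tire": 4})
logits = model(torch.randn(1, 3, 224, 224))      # dict of per-type damage logits
```

A single pass through the trunk thus yields predictions from every head at once, which is what allows all damaged part information to be obtained in one forward inference.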
The difference between the structural block diagram of the vehicle detection model in the related art and the structural block diagram of the integrated damage assessment model in the embodiment will be described in comparison.
Referring to fig. 3, a structural block diagram of a vehicle detection model in the related art is shown, and because there are many vehicle components, the present embodiment is described by selecting three types, namely, a sheet metal part, a glass part, and a tire, and it should be understood that the present invention is not limited to these three types.
The structural block diagram of the vehicle detection model comprises three functional units, which are respectively:
a first unit: a damage data set, comprising: a sheet metal part damage detection data set, a glass damage detection data set and a tire damage detection data set. Each damage data set is annotated for the damage of one specific type of part and does not contain damage annotation information for other parts; the training samples of a damage data set may include: a damage image, a damage judgment label, and the like.
A second unit: single-data-set detection models, comprising: a sheet metal part damage detection model, a glass damage detection model and a tire damage detection model; each damage detection model includes a damage detection main body and a damage detection classifier. A different detection model is obtained for each data set.
A third unit: loss functions, comprising: a sheet metal part damage loss function, a glass damage loss function and a tire damage loss function; the loss functions correspond one to one to the damage detection models and are used for updating the parameters of the damage detection models.
During training, the damage data set of the first unit is respectively input into the single data set detection model of the second unit to obtain corresponding damage detection results, and parameters of the damage detection model are updated by using the loss function of the third unit. For example, the sheet metal part damage detection data set is input into the sheet metal part damage detection model to obtain a sheet metal part damage detection result, and then parameters of the sheet metal part damage detection model are updated by using a sheet metal part damage loss function.
Referring to fig. 3, in general, each of the detection models in the related art runs independently without interfering with the others. This leads to the following problems. First, in the model training and development stage, multiple annotated damage data sets need to be prepared and multiple damage detection models need to be trained and optimized separately; this process of training different damage detection models separately is cumbersome and inefficient, requires a larger investment of manpower and material resources, and the resulting models have poor robustness because of the low data reuse rate. Second, in the model deployment stage, multiple models consume more computing resources and increase scheduling difficulty, cost and so on. A structural block diagram of the integrated damage assessment model provided by an embodiment of the present application is explained below with reference to fig. 4.
Referring to fig. 4, a structural block diagram of an integrated damage assessment model provided by an embodiment of the present application comprises the following functional units.
Because there are many vehicle parts, the present embodiment is described by selecting three types of sheet metal parts, glass, and tires, and it should be understood that the present invention is not limited to these three types.
The structural block diagram of this integrated damage assessment model comprises four units, which are respectively:
a first unit: a damage data set, comprising: a sheet metal part damage detection data set, a glass damage detection data set, a tire damage detection data set, and the like. Each damage data set is annotated for the damage of one specific type of part and does not contain damage annotation information for other parts; the training samples of a damage data set may include: damage images, damage judgment labels, and the like, for example:
sample 1: the damage image is a glass image, and the damage judgment label is as follows: [ glass, moderate damage, fracture ];
sample 2: the damage image is a sheet metal part image, and the damage judgment label is as follows: [ sheet metal, slight damage, scratching ].
A second unit: a shared trunk neural network, obtained by integrating the damage detection models corresponding to at least one component type, such as a sheet metal part damage detection model, a glass damage detection model and a tire damage detection model.
A third unit: different part types correspond to different damage detection classification layers, such as a sheet metal damage detection classification layer, a glass damage detection classification layer, a tire damage detection classification layer and the like. The second unit and the third unit constitute the integrated damage assessment model of the present embodiment.
A fourth unit: loss functions, comprising: a sheet metal part damage loss function, a glass damage loss function, a tire damage loss function and the like; the loss functions correspond one to one to the damage detection models and are used for updating the parameters of the damage detection models. Because each damage data set is annotated for damage of one specific type of part, the damage classes of different damage data sets do not overlap, and the loss functions do not need to be merged into one for training.
During training, a damage data set of the first unit is input into the shared trunk neural network of the second unit, and the corresponding damage detection classification layer is then selected according to the damaged component of the output. Because each damage data set is annotated only for the damage of one specific type of part and contains no damage annotations for other parts, the damage classes cannot simply be merged to train a single detection model that detects the damage of all parts. Therefore, without merging the target class spaces of the multiple damage data sets, an integrated shared trunk neural network is trained; that is, multiple damage detection models, each for a specific damage data set, are trained in parallel on the shared trunk network.
For example, the sheet metal part damage detection data set is input into the shared trunk neural network and then output to the sheet metal part damage detection classification layer, and then parameters of the sheet metal part damage detection model are updated by using a sheet metal part damage loss function.
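A hedged sketch of one training step under this scheme follows, assuming an integrated model of the kind sketched earlier (returning a dict of per-type logits) and one cross-entropy loss per component type; only the head that matches the batch's component type contributes to the loss, while the shared trunk receives gradients from every data set.

```python
import torch


def train_step(model, optimizer, loss_fns, images: torch.Tensor,
               labels: torch.Tensor, part_type: str) -> float:
    """One step on a batch drawn from the damage data set of `part_type`.

    `model` returns a dict of per-type logits (as in the earlier sketch) and
    `loss_fns` maps each component type to its own loss, e.g.
    {"sheet_metal": nn.CrossEntropyLoss(), ...} (illustrative assumption).
    """
    optimizer.zero_grad()
    outputs = model(images)                                   # every head, one forward pass
    loss = loss_fns[part_type](outputs[part_type], labels)    # loss of the matching head only
    loss.backward()                                           # gradients reach trunk + that head
    optimizer.step()
    return loss.item()
```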
Additionally, in one embodiment, the shared backbone neural Network is a Deep Residual neural Network (Deep Residual Network).
In a traditional deep learning network, the deeper the network and the more layers it has, the more it can learn, at the cost of slower convergence and longer training time. However, once the depth reaches a certain point, further depth yields slower learning; in some scenarios the accuracy even drops as the number of layers grows, and the vanishing-gradient and exploding-gradient phenomena occur easily. This is not due to overfitting, which is the situation where a model fits the training set too well but performs poorly on new data. In general, both the training error and the test error of a deep network become larger once the number of layers is increased beyond a point, which means that the network becomes difficult to train as it gets deeper.
The deep residual neural network introduces the residual block to solve the problem that, as the network gets deeper, learning slows down and the accuracy cannot be improved effectively. The principle of the residual block is to pass the output of several earlier layers directly to the input of a later layer. In short, much like a skip connection, the "clean" data from the front and the "lossily compressed" data from the back are used together as the input to the subsequent network, so that the network can learn richer content. For this reason the shared backbone neural network of this embodiment adopts a deep residual neural network.
In an embodiment, correspondingly, referring to fig. 5, the step S120 includes, but is not limited to, the following steps:
and step S121, performing residual error feature vector extraction processing on the damaged vehicle image based on each residual error block sequentially connected in the depth residual error neural network to obtain first output data.
In an embodiment, each residual block includes an identity mapping and at least two convolutional layers, and the identity mapping of a residual block runs from the input end of that residual block to its output end, i.e. the identity mapping is the skip connection mentioned above.
And step S122, inputting the first output data into a damage detection classification layer corresponding to the component type to obtain damaged component information.
In one embodiment, the shared backbone neural network comprises one of: a Res-Net50 network, a Res-Net101 network, a Res-Net110 network, or a Res-Net152 network. Since the related art has more detailed description about the network structure, it is not described herein again.
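A minimal sketch of a residual block of the kind described here follows, with two convolution layers and an identity mapping ("skip connection") from the block's input to its output; the channel count and layer choices are illustrative assumptions.

```python
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """Two conv layers with an identity mapping from input to output (sketch)."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        identity = x                      # identity mapping (skip connection)
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + identity)  # "clean" input added to the processed output
```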
The training process of the integrated damage assessment model of the present embodiment is described below.
In an embodiment, referring to fig. 6, a flowchart of training the integrated damage assessment model in the vehicle damage detection method provided in an embodiment of the present application includes, but is not limited to, steps S610 to S640.
Step S610, a damage data set corresponding to at least one component type is obtained as a training data set, and the training data set includes a corresponding damage judgment label.
Step S620, inputting the training data set into the shared backbone neural network to obtain feature data, where the feature data is obtained by extracting and processing residual feature vectors, that is, similar to the first output data.
Step S630, inputting the characteristic data into a damage detection classification layer corresponding to the component type to obtain a damaged component information detection result.
And step S640, training to obtain an integrated damage assessment model according to the detection error between the damaged part information detection result and the damage judgment label.
In an embodiment, the damage detection model corresponding to each component type includes a corresponding loss function, and step S640 is specifically described as: and adjusting parameters in the integrated damage assessment model according to the detection error until the loss function meets the convergence condition to obtain the integrated damage assessment model, namely updating the parameters of the damage detection model according to the loss function. In this embodiment, the convergence condition may be: the loss function is minimized, i.e. the parameters for each damage detection model are optimized by minimizing each loss function.
In this embodiment, annotated damage data of all types is used when training the integrated damage assessment model, which significantly improves the robustness of the integrated damage assessment model and reduces the possibility of overfitting. Compared with multiple damage detection models, a single integrated model can obtain all damage detection results in one forward inference, greatly reducing the computing resources occupied and the computing time consumed.
In addition, in an embodiment, for the problem that each damage sample in a plurality of damage data sets is unbalanced, when the integrated damage assessment model is trained, a uniform sampling strategy and/or an inter-class balance sampling strategy are/is adopted to sample the training data sets.
1) Uniform sampling strategy: the damage data sets corresponding to all component types are uniformly sampled to obtain the training data set, so that the numbers of samples in the damage data sets corresponding to different component types are balanced and the overall performance of the integrated damage assessment model on each damage data set is improved.
2) Inter-class balanced sampling strategy: when the numbers of samples in the damage data sets corresponding to different component types differ greatly, the damage data set corresponding to the component type with fewer samples is oversampled to obtain the training data set. For example, if the number of samples in the damage data set of a certain component category is obviously smaller than in the other damage data sets, that category also provides less information; balanced sampling with this strategy keeps the amount of training data as consistent as possible across categories.
This strategy aims to solve the problem of sample imbalance among different classes in the damage data sets and thereby improve performance on them. The main objective is to ensure, as far as possible, that in each batch of training samples every class (i.e. samples of a different component type) appears with the same probability, and to avoid a fixed picture input order.
In this embodiment, two lists are used and samples are drawn for each batch by iterative sampling. In each iteration, a category X (e.g., glass) is first sampled from the component category list, and then a picture is sampled from the image list of category X (e.g., the glass damage data set); when the image list of category X has been completely traversed, it is re-shuffled and traversed from the beginning. In effect this "oversamples" the damage data sets with few samples, increasing their sample counts and ensuring that the numbers of samples from the different component categories in a batch are balanced. The component category list is processed similarly, to ensure that the component categories in each batch are also balanced. The inter-class balanced sampling strategy above solves the problem of unbalanced class distribution in the training samples.
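A sketch of this two-list balanced sampling follows, assuming each component type maps to a list of image identifiers; the class and method names are hypothetical.

```python
import random


class InterClassBalancedSampler:
    """Yields (part_type, image_id) pairs so every component type appears with the
    same probability, in effect oversampling the smaller damage data sets."""

    def __init__(self, datasets: dict[str, list[str]]):
        # datasets: component type -> list of image identifiers in its damage data set
        self.part_types = list(datasets)                  # component category list
        self.image_lists = {t: list(imgs) for t, imgs in datasets.items()}
        self.cursors = {t: 0 for t in datasets}
        for imgs in self.image_lists.values():
            random.shuffle(imgs)

    def sample(self):
        part_type = random.choice(self.part_types)        # sample a category X first
        imgs = self.image_lists[part_type]
        if self.cursors[part_type] >= len(imgs):          # image list fully traversed:
            random.shuffle(imgs)                          # re-shuffle and start over
            self.cursors[part_type] = 0
        image_id = imgs[self.cursors[part_type]]          # then sample a picture of X
        self.cursors[part_type] += 1
        return part_type, image_id
```

Sampling the category before the picture makes each component type equally likely in a batch regardless of how many images its damage data set contains, which is the oversampling effect described above.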
Additionally, in one embodiment, the vehicle component segmentation model is a pre-trained neural network model. In the forward inference stage, an image is input into the integrated damage assessment model, and the model simultaneously outputs damaged part information for the multiple damage data sets; the specific position to which this damaged part information belongs then needs to be determined.
The following describes two specific forms of the vehicle component segmentation model.
1) The vehicle part segmentation model comprises a part segmentation network and a global feature extraction network. A global feature map of the input damaged vehicle image is collected, and the global feature extraction network obtains a plurality of local features corresponding to the global feature map, so that the part segmentation network can obtain vehicle part segmentation information from those local features; the parameters of the model are updated according to a loss function, so that the vehicle part segmentation model can output the damage position corresponding to the damaged vehicle image.
2) A vehicle component segmentation model is constructed following a semantic segmentation approach. The training process comprises: annotating the categories in the damaged vehicle image samples to generate a training set, where each category has a corresponding weight; training the semantic segmentation model to be trained with a corresponding loss function; and, after the loss value is obtained, adjusting the model parameters of the semantic segmentation model according to the loss value, thereby producing the vehicle component segmentation model constructed following the semantic segmentation approach.
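As an illustration of option 2), the following hedged training-step sketch uses torchvision's DeepLabV3 as a stand-in semantic segmentation network; the vehicle part categories, their loss weights and the hyper-parameters are assumptions for illustration only, not values given in this application.

```python
import torch
import torch.nn as nn
from torchvision.models.segmentation import deeplabv3_resnet50

# Illustrative part categories and per-category loss weights (assumptions).
PART_CLASSES = ["background", "front windshield", "left front door", "tire"]
CLASS_WEIGHTS = torch.tensor([0.1, 1.0, 1.0, 1.0])

seg_model = deeplabv3_resnet50(weights=None, num_classes=len(PART_CLASSES))
criterion = nn.CrossEntropyLoss(weight=CLASS_WEIGHTS)   # weighted per-category loss
optimizer = torch.optim.Adam(seg_model.parameters(), lr=1e-4)


def segmentation_train_step(images: torch.Tensor, part_masks: torch.Tensor) -> float:
    """One step: images of shape (N, 3, H, W), part_masks of shape (N, H, W) with class indices."""
    optimizer.zero_grad()
    logits = seg_model(images)["out"]        # (N, num_classes, H, W)
    loss = criterion(logits, part_masks)
    loss.backward()                          # adjust model parameters from the loss value
    optimizer.step()
    return loss.item()
```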
The above describes a specific structure of the integrated damage assessment model and the vehicle component segmentation model, and referring to fig. 7, a flow chart of a vehicle damage detection method provided in an embodiment of the present application is shown, and the following steps are included in the flow chart:
step S710, a damaged vehicle image is acquired.
And S720, simultaneously inputting the damaged vehicle image into an integrated damage assessment model and a vehicle component segmentation model, wherein the integrated damage assessment model comprises a shared trunk neural network and a plurality of damage detection classification layers.
Step S730, respectively acquiring the damaged part information output by the integrated damage assessment model and the damaged position output by the vehicle part segmentation model.
And step S740, locating the damaged part information according to the damage position to obtain the damage information of the damaged vehicle, keeping only the damage information that matches the target part. That is, combined with the vehicle component segmentation model, the detection result of the integrated damage assessment model is located on the components, which yields the damaged part information together with its damage position and filters out false damage detections in the background area.
For example, for the input damaged vehicle image, the damaged part information output by the integrated damage assessment model includes: the names of the damaged vehicle parts are a sheet metal part and glass, the damage states are a scratch and a rupture, and the damage degree is moderate damage.
Obtaining a damage position corresponding to the damaged vehicle in the image of the damaged vehicle through the vehicle component segmentation model, for example: the right side of the front windshield.
The damage position and the damaged part information are then merged to obtain the damage information of the damaged vehicle, which comprises, for example: the right side of the front windshield has a rupture of moderate damage degree. That is, the damage detection results for glass within the glass segmentation area are kept, while any possible tire or sheet metal damage detection results there are removed.
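A hedged sketch of this filtering step follows, assuming the integrated model's detections carry pixel-coordinate bounding boxes and the segmentation model yields a per-pixel part label map; the data formats and the overlap threshold are illustrative assumptions.

```python
import numpy as np


def locate_and_filter(damage_detections, part_label_map, part_names, min_overlap=0.5):
    """Keep only damage detections that fall on a segmented part of the same type.

    damage_detections: list of dicts like
        {"part": "glass", "state": "rupture", "degree": "moderate damage",
         "box": (x1, y1, x2, y2)}   # integer pixel coordinates (assumption)
    part_label_map: (H, W) array of part indices from the segmentation model
    part_names: index -> readable part name, e.g. {1: "front windshield glass"}
    """
    results = []
    for det in damage_detections:
        x1, y1, x2, y2 = det["box"]
        region = part_label_map[y1:y2, x1:x2]
        if region.size == 0:
            continue
        labels, counts = np.unique(region[region > 0], return_counts=True)
        if labels.size == 0:
            continue                                  # lies on background: discard
        best = labels[np.argmax(counts)]              # most frequent part under the box
        if counts.max() / region.size < min_overlap:
            continue                                  # too little overlap with any part
        part_name = part_names[int(best)]
        if det["part"] in part_name:                  # keep only matching component types
            results.append({**det, "position": part_name})
    return results
```

Only detections that overlap a segmented part of the same component type survive, which is how false damage detections in the background area are filtered out.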
The embodiment of the application provides a vehicle damage detection method: a damaged vehicle image is obtained, the damaged vehicle image is input into a trained integrated damage assessment model to obtain damaged part information, the damage position corresponding to the damaged vehicle image is obtained with a vehicle part segmentation model, and the damage information of the vehicle is obtained according to the damage position and the damaged part information. The integrated damage assessment model includes a shared trunk neural network obtained by integrating the damage detection models corresponding to at least one component type; this integrated form facilitates model optimization and training, and the annotated data of all damage types can be used during training, which effectively improves the robustness of the model and reduces the possibility of overfitting. Compared with the multiple separate damage detection models of the related art, the integrated damage assessment model obtains all the damaged part information in a single forward inference, which greatly reduces the computing resources occupied and the computing time consumed, effectively improves detection efficiency, and reduces the labor cost of the vehicle damage assessment process. After combining the damage position corresponding to the damaged vehicle image obtained by the vehicle part segmentation model, damage information of the vehicle containing positioning information can be obtained; moreover, because the damage position of the part is included, false damage detections in the background area can be filtered out, further improving the detection accuracy of vehicle damage detection.
In addition, an embodiment of the present application further provides a vehicle damage detection apparatus, and with reference to fig. 8, the apparatus includes:
an acquire image module 810 for acquiring an image of a damaged vehicle;
a damaged component information determining module 820, configured to input the damaged vehicle image into a trained integrated damage assessment model to obtain damaged component information, where the integrated damage assessment model includes:
the shared trunk neural network is obtained by integrating damage detection models corresponding to at least one component type, and the shared trunk neural network is used for preprocessing the damaged vehicle image to obtain first output data;
the damage detection classification layer corresponding to at least one component type is used for processing the first output data to obtain damaged component information;
the image segmentation module 830 is configured to input the damaged vehicle image into the vehicle component segmentation model, so as to obtain a damaged position corresponding to the damaged vehicle image;
and a damage information synthesis module 840, configured to obtain the damage information of the vehicle by positioning the damaged component information according to the damage position.
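By way of non-limiting illustration, the following sketch shows one way the modules above could be chained; the function signature and the callables passed in are assumptions for the sketch (for example, synthesize could be the merge_damage_info sketch given earlier) and do not prescribe the actual interfaces of the apparatus.

    # Illustrative wiring of the modules shown in fig. 8; the signature and the
    # callables are assumptions made for this sketch only.
    def detect_vehicle_damage(image, damage_model, segmentation_model, synthesize):
        # the image is assumed to have been obtained by the image acquisition module 810
        detections = damage_model(image)         # damaged component information module 820
        regions = segmentation_model(image)      # image segmentation module 830
        return synthesize(detections, regions)   # damage information synthesis module 840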
The above-described apparatus embodiment is merely illustrative; the units described as separate components may or may not be physically separate, that is, they may be located in one place or distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
It should be noted that the vehicle damage detection apparatus in this embodiment may execute the vehicle damage detection method of the embodiment shown in fig. 2. That is, the vehicle damage detection apparatus in this embodiment and the vehicle damage detection method of the embodiment shown in fig. 2 belong to the same inventive concept; they therefore share the same implementation principle and technical effect, which are not described in detail again here.
In addition, an embodiment of the present application further provides a computer device, where the computer device includes: a memory, a processor, and a computer program stored on the memory and executable on the processor.
The processor and memory may be connected by a bus or other means.
The memory, which is a non-transitory computer readable storage medium, may be used to store non-transitory software programs as well as non-transitory computer executable programs. Further, the memory may include high speed random access memory, and may also include non-transitory memory, such as at least one disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory optionally includes memory located remotely from the processor, and these remote memories may be connected to the processor through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The non-transitory software programs and instructions required to implement the vehicle damage detection method of the above embodiments are stored in the memory and, when executed by the processor, perform the vehicle damage detection method of the above embodiments, for example, method steps S110 to S140 in fig. 2, method steps S510 to S530 in fig. 5, method steps S610 to S640 in fig. 6, and the like.
Furthermore, an embodiment of the present application further provides a computer-readable storage medium storing computer-executable instructions which, when executed by a processor or a controller, for example by a processor in the computer device embodiment described above, cause the processor to execute the vehicle damage detection method of the above embodiments, for example, method steps S110 to S140 in fig. 2, method steps S510 to S530 in fig. 5, method steps S610 to S640 in fig. 6, and the like.
One of ordinary skill in the art will appreciate that all or some of the steps, systems, and methods disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for the storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. In addition, communication media typically embodies computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and includes any information delivery media, as known to those skilled in the art.
While the preferred embodiments of the present invention have been described in detail, it will be understood, however, that the present invention is not limited to those precise embodiments, and that various changes and modifications may be effected therein by one skilled in the art without departing from the scope of the invention as defined by the appended claims.

Claims (10)

1. A vehicle damage detection method, comprising:
acquiring a damaged vehicle image;
inputting the damaged vehicle image into a trained integrated damage assessment model to obtain damaged component information, wherein the integrated damage assessment model comprises:
the shared trunk neural network is obtained by integrating damage detection models corresponding to at least one component type, and is used for preprocessing the damaged vehicle image to obtain first output data;
at least one damage detection classification layer corresponding to the component type, which is used for processing the first output data to obtain damaged component information;
inputting the damaged vehicle image into a vehicle component segmentation model to obtain a damage position corresponding to the damaged vehicle image;
and positioning the damaged component information according to the damage position to obtain the damage information of the vehicle.
2. The vehicle damage detection method of claim 1, wherein the shared trunk neural network is a deep residual neural network, the deep residual neural network comprising: a ResNet-50 network, a ResNet-101 network, a ResNet-110 network, or a ResNet-152 network;
the step of inputting the damaged vehicle image into a trained integrated damage assessment model to obtain damaged component information includes:
performing residual feature vector extraction processing on the damaged vehicle image based on all the residual blocks sequentially connected in the deep residual neural network to obtain the first output data, wherein any one residual block comprises an identity mapping and at least two convolution layers, and the identity mapping of the residual block points from the input end of the residual block to the output end of the residual block;
and inputting the first output data to a damage detection classification layer corresponding to the component type to obtain the damaged component information.
3. The vehicle damage detection method of claim 1, wherein the integrated damage assessment model is trained by the following training process:
acquiring a damage data set corresponding to at least one component type as a training data set, wherein the training data set comprises corresponding damage judgment labels;
inputting the training data set into the shared trunk neural network to obtain feature data;
inputting the feature data into a damage detection classification layer corresponding to the component type to obtain a damaged component information detection result;
and training to obtain the integrated damage assessment model according to the detection error between the damaged component information detection result and the damage judgment label.
4. The vehicle damage detection method according to claim 3, wherein the damage detection model corresponding to each component type includes a corresponding loss function, and the training to obtain the integrated damage assessment model according to the detection error between the damaged component information detection result and the damage judgment label further includes:
and adjusting parameters in the integrated damage assessment model according to the detection error until the loss function meets a convergence condition to obtain the integrated damage assessment model.
5. The vehicle damage detection method of claim 4, wherein the obtaining a damage data set corresponding to at least one component type as a training data set comprises:
and uniformly sampling the damage data sets corresponding to each component type by adopting a uniform sampling strategy to obtain the training data set, so as to balance the number of samples among the damage data sets corresponding to different component types.
6. The vehicle damage detection method of claim 4, wherein the obtaining a damage data set corresponding to at least one component type as a training data set comprises:
and when the difference in the number of samples between the damage data sets corresponding to different component types is large, oversampling the damage data set corresponding to the component type with the smaller number of samples by adopting an inter-class balance sampling strategy to obtain the training data set.
7. The vehicle damage detection method according to any one of claims 1 to 6, characterized in that the component type includes: sheet metal parts, glass and tires;
the shared trunk neural network is obtained by integrating a sheet metal part damage detection model, a glass damage detection model and a tire damage detection model;
the damage detection classification layer includes: a sheet metal part damage detection classification layer, a glass damage detection classification layer or a tire damage detection classification layer;
the damaged component information includes: a damaged component name, a damage state, and a damage degree;
the damaged component name includes: one or more of a sheet metal part, glass, or a tire;
the damage state includes: one or more of a scratch, a dent, a wrinkle, a tear, a missing part, or a rupture;
the damage degree includes: one or more of mild damage, moderate damage, or severe damage.
8. A vehicle damage detection device, characterized by comprising:
the image acquisition module is used for acquiring an image of the damaged vehicle;
a damaged component information determination module, configured to input the damaged vehicle image into a trained integrated damage assessment model to obtain damaged component information, where the integrated damage assessment model includes:
the shared trunk neural network is obtained by integrating damage detection models corresponding to at least one component type, and is used for preprocessing the damaged vehicle image to obtain first output data;
at least one damage detection classification layer corresponding to the component type, which is used for processing the first output data to obtain damaged component information;
the image segmentation module is used for inputting the damaged vehicle image into a vehicle component segmentation model to obtain a damage position corresponding to the damaged vehicle image;
and the damage information synthesis module is used for obtaining the damage information of the vehicle by positioning the damaged component information according to the damage position.
9. A computer device comprising a processor and a memory;
the memory is used for storing programs;
the processor is configured to execute the vehicle damage detection method according to any one of claims 1 to 7 in accordance with the program.
10. A computer-readable storage medium storing computer-executable instructions for performing the vehicle damage detection method of any one of claims 1 to 7.
CN202111080117.1A 2021-09-15 2021-09-15 Vehicle damage detection method, device, equipment and storage medium Active CN113780435B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111080117.1A CN113780435B (en) 2021-09-15 2021-09-15 Vehicle damage detection method, device, equipment and storage medium
PCT/CN2022/071075 WO2023040142A1 (en) 2021-09-15 2022-01-10 Vehicle damage detection method and apparatus, and device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111080117.1A CN113780435B (en) 2021-09-15 2021-09-15 Vehicle damage detection method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113780435A true CN113780435A (en) 2021-12-10
CN113780435B CN113780435B (en) 2024-04-16

Family

ID=78844155

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111080117.1A Active CN113780435B (en) 2021-09-15 2021-09-15 Vehicle damage detection method, device, equipment and storage medium

Country Status (2)

Country Link
CN (1) CN113780435B (en)
WO (1) WO2023040142A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023040142A1 (en) * 2021-09-15 2023-03-23 平安科技(深圳)有限公司 Vehicle damage detection method and apparatus, and device and storage medium

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116883444B (en) * 2023-08-02 2024-01-12 武汉理工大学 Automobile damage detection method based on machine vision and image scanning
CN116910495B (en) * 2023-09-13 2024-01-26 江西五十铃汽车有限公司 Method and system for detecting off-line of automobile, readable storage medium and automobile

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109359676A (en) * 2018-10-08 2019-02-19 百度在线网络技术(北京)有限公司 Method and apparatus for generating vehicle damage information
CN109657596A (en) * 2018-12-12 2019-04-19 天津卡达克数据有限公司 A kind of vehicle appearance component identification method based on deep learning
CN109657716A (en) * 2018-12-12 2019-04-19 天津卡达克数据有限公司 A kind of vehicle appearance damnification recognition method based on deep learning
CN110443814A (en) * 2019-07-30 2019-11-12 北京百度网讯科技有限公司 Damage identification method, device, equipment and the storage medium of vehicle
CN110728236A (en) * 2019-10-12 2020-01-24 创新奇智(重庆)科技有限公司 Vehicle loss assessment method and special equipment thereof
US20210182713A1 (en) * 2019-12-16 2021-06-17 Accenture Global Solutions Limited Explainable artificial intelligence (ai) based image analytic, automatic damage detection and estimation system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113780435B (en) * 2021-09-15 2024-04-16 平安科技(深圳)有限公司 Vehicle damage detection method, device, equipment and storage medium

Also Published As

Publication number Publication date
WO2023040142A1 (en) 2023-03-23
CN113780435B (en) 2024-04-16

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant