CN113705351A - Vehicle damage assessment method, device and equipment

Vehicle damage assessment method, device and equipment

Info

Publication number
CN113705351A
CN113705351A
Authority
CN
China
Prior art keywords
vehicle
damage
component
vehicle damage
image
Prior art date
Legal status
Granted
Application number
CN202110860166.0A
Other languages
Chinese (zh)
Other versions
CN113705351B (en)
Inventor
申颖
徐晓丹
朱向雷
侯明智
孟菲
刘英男
朱倩倩
张冬华
李薇
赵子豪
Current Assignee
Bank Of China Insurance Information Technology Management Co ltd
Automotive Data of China Tianjin Co Ltd
Original Assignee
Bank Of China Insurance Information Technology Management Co ltd
Automotive Data of China Tianjin Co Ltd
Priority date
Filing date
Publication date
Application filed by Bank Of China Insurance Information Technology Management Co ltd and Automotive Data of China Tianjin Co Ltd
Priority to CN202110860166.0A
Publication of CN113705351A
Application granted
Publication of CN113705351B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/08Insurance

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • Artificial Intelligence (AREA)
  • Strategic Management (AREA)
  • Marketing (AREA)
  • Economics (AREA)
  • Development Economics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Technology Law (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a vehicle damage assessment method, device and equipment, relates to the technical field of artificial intelligence, and can integrate vehicle damage information more reasonably and improve the accuracy of vehicle damage assessment results. The method comprises the following steps: acquiring vehicle image information, wherein the vehicle image information comprises a plurality of preprocessed vehicle damage images shot from multiple directions; recognizing each vehicle damage image with a pre-trained recognition model to obtain a vehicle damage recognition result and a vehicle component recognition result corresponding to each vehicle damage image, and determining the vehicle damage information corresponding to each vehicle damage image according to the vehicle damage recognition result and the vehicle component recognition result; cleaning the vehicle damage information corresponding to each vehicle damage image according to preset logic rules, and deleting and/or correcting vehicle damage information that does not meet the damage judgment conditions; and integrating the cleaned vehicle damage information of all the images and outputting a final vehicle damage assessment result.

Description

Vehicle damage assessment method, device and equipment
Technical Field
The application relates to the technical field of artificial intelligence, in particular to a vehicle damage assessment method, device and equipment.
Background
Using image recognition technology to assess vehicle damage can assist insurance companies in surveying and assessing accident vehicles, thereby reducing cost and improving efficiency. Existing vehicle image damage assessment methods rely mainly on an image recognition algorithm to judge damage. However, because the shooting angle and shooting distance of vehicle damage images at an accident scene are difficult to unify, and because the images are affected by lighting conditions, the surrounding environment and the large differences in the appearance of parts across vehicle models, it is difficult for an image recognition algorithm alone to judge damage with high accuracy.
Disclosure of Invention
In view of the above, the present application provides a vehicle damage assessment method, apparatus and device, mainly aiming to solve the problem in the prior art that the accuracy of vehicle damage assessment based on image recognition technology alone is not high.
According to a first aspect of the present application, there is provided a vehicle damage assessment method, the method comprising:
acquiring vehicle image information, wherein the vehicle image information comprises a plurality of vehicle damage images which are shot in multiple directions and subjected to preprocessing;
recognizing each vehicle damage image by using a pre-trained recognition model to obtain a vehicle damage recognition result and a vehicle component recognition result corresponding to each vehicle damage image, wherein the vehicle damage recognition result comprises a damage type and a damage area of a vehicle, and the vehicle component recognition result comprises a component type and a component area of the vehicle;
determining vehicle damage information corresponding to each vehicle damage image according to the vehicle damage identification result corresponding to each vehicle damage image and the vehicle part identification result, wherein the vehicle damage information comprises a damaged part, a damage type and a damaged area ratio;
cleaning the vehicle damage information corresponding to each vehicle damage image according to a preset logic rule, and deleting and/or correcting the vehicle damage information which does not meet the damage judgment condition to obtain the cleaned vehicle damage information of all the images;
and integrating the cleaned vehicle damage information of all the images and outputting a final vehicle damage assessment result.
Further, after acquiring the vehicle image information, the method further includes:
and deleting and/or adjusting the vehicle damage images which do not accord with the shooting conditions by preprocessing the vehicle damage images shot in multiple directions.
Further, the recognition model includes a damage recognition model and a component recognition model, and before the recognition model trained in advance is used to recognize each vehicle damage image and obtain a vehicle damage recognition result and a vehicle component recognition result corresponding to each vehicle damage image, the method further includes:
collecting historical case images of different accident vehicles in advance;
marking the vehicle appearance damage and the vehicle appearance component appearing in the historical case image in the form of a characteristic example graph to form vehicle damage marking data and vehicle component marking data;
respectively inputting historical case images carrying vehicle damage labeling data and vehicle component labeling data into a semantic segmentation model based on a deep learning framework as sample images for training, and constructing a damage identification model and a component identification model, wherein the damage identification model is used for identifying the vehicle damage images in combination with the vehicle damage labeling data, and the component identification model is used for identifying the vehicle damage images in combination with the vehicle component labeling data.
Further, the labeling of the vehicle appearance damage and the vehicle appearance component appearing in the historical case image in the form of a characteristic example graph to form vehicle damage labeling data and vehicle component labeling data specifically includes:
marking the vehicle appearance damage appearing in the historical case image in a polygonal pixel semantic segmentation mode to form vehicle damage marking data;
and labeling the vehicle appearance component appearing in the historical case image in a polygonal pixel semantic segmentation mode to obtain component type and component area, and forming vehicle component labeling data.
Further, after the pre-trained recognition model is used to recognize each vehicle damage image, and a vehicle damage recognition result and a vehicle component recognition result corresponding to each vehicle damage image are obtained, the method further includes:
comparing the coordinates of the area position of each part in the vehicle part identification result by using a pre-established vehicle appearance part distribution relation table, wherein the area relation information of vehicle part distribution is recorded in the part distribution relation table;
and if the coordinate comparison result represents that the part has a covering relationship, carrying out region matting processing on the part covered by other parts.
Further, the determining the vehicle damage information corresponding to each vehicle damage image according to the vehicle damage identification result corresponding to each vehicle damage image and the vehicle component identification result specifically includes:
respectively converting the vehicle damage identification result and the vehicle component identification result into matrix form representations to obtain a damage matrix and a component matrix;
calculating a Hadamard product of the damage matrix and the component matrix to obtain a damaged component comparison matrix;
and determining vehicle damage information by utilizing the damaged component comparison matrix, wherein the vehicle damage information comprises damaged components, damage types and damage area ratios.
Further, the determining the vehicle damage information by using the damaged component comparison matrix specifically includes:
traversing the damage matrix and the component matrix, and judging whether the damaged component comparison matrix is zero or not; if so, determining that the damage does not occur in the corresponding vehicle component area; otherwise, judging that the damage occurs in the corresponding vehicle component area;
and summarizing the vehicle damage identification result of the damage occurring in the corresponding vehicle component area and the vehicle component identification result to obtain vehicle damage information.
Further, the integrating the washed vehicle damage information of all the images and outputting a final vehicle damage assessment result specifically includes:
integrating vehicle damage information of the same damage type under the same component according to the damage area ratio, and reserving the vehicle damage information of the same damage type under the same component with the largest damage area ratio;
aiming at vehicle damage information of different damage types under the same component, integrating the vehicle damage information according to a pre-established damage type severity sequence, and reserving the vehicle damage information with the highest damage type severity under the same component;
and summarizing the vehicle damage information of all the components and outputting a final vehicle damage assessment result.
According to a second aspect of the present application, there is provided a vehicle damage assessment apparatus comprising:
the acquisition unit is used for acquiring vehicle image information, wherein the vehicle image information comprises a plurality of preprocessed vehicle damage images shot from multiple directions;
the recognition unit is used for recognizing each vehicle damage image by using a pre-trained recognition model to obtain a vehicle damage recognition result and a vehicle component recognition result corresponding to each vehicle damage image, wherein the vehicle damage recognition result comprises a damage type and a damage area of the vehicle, and the vehicle component recognition result comprises a component type and a component area of the vehicle;
the determining unit is used for determining vehicle damage information corresponding to each vehicle damage image according to the vehicle damage identification result corresponding to each vehicle damage image and the vehicle component identification result, and the vehicle damage information comprises a damaged component, a damage type and a damaged area ratio;
the cleaning unit is used for cleaning the vehicle damage information corresponding to each vehicle damage image according to a preset logic rule, and deleting and/or correcting the vehicle damage information which does not meet the damage judgment condition to obtain the cleaned vehicle damage information of all the images;
and the integration unit is used for integrating the cleaned vehicle damage information of all the images and outputting a final vehicle damage assessment result.
Further, the apparatus further comprises:
and the processing unit is used for deleting and/or adjusting the vehicle damage image which does not accord with the shooting condition by preprocessing the vehicle damage image shot in multiple directions after the vehicle image information is obtained.
Further, the identification model includes a damage identification model and a component identification model, and the apparatus further includes:
the collecting unit is used for collecting historical case images of different accident vehicles in advance before the pre-trained recognition model is used for recognizing each vehicle damage image to obtain a vehicle damage recognition result and a vehicle component recognition result corresponding to each vehicle damage image;
the labeling unit is used for labeling the vehicle appearance damage and the vehicle appearance component appearing in the historical case image in the form of a characteristic example graph to form vehicle damage labeling data and vehicle component labeling data;
the training unit is used for inputting the historical case images carrying the vehicle damage labeling data and the vehicle component labeling data into a semantic segmentation model based on a deep learning framework as sample images respectively for training, and constructing a damage identification model and a component identification model, wherein the damage identification model is used for identifying the vehicle damage images by combining the vehicle damage labeling data, and the component identification model is used for identifying the vehicle damage images by combining the vehicle component labeling data.
Further, the labeling unit is specifically configured to label a damage type and a damage area of the vehicle appearance damage appearing in the historical case image in a polygonal pixel semantic segmentation form to form vehicle damage labeling data;
the labeling unit is specifically configured to label the vehicle appearance component appearing in the historical case image in a component type and a component area in a polygonal pixel semantic segmentation mode to form vehicle component labeling data.
Further, the apparatus further comprises:
the comparison unit is used for comparing the coordinates of the area position of each part in the vehicle part identification result by using a pre-established vehicle appearance part distribution relation table after the pre-trained identification model is used for identifying each vehicle damage image to obtain the vehicle damage identification result and the vehicle part identification result corresponding to each vehicle damage image, and the area relation information of vehicle part distribution is recorded in the part distribution relation table;
and the matting unit is used for carrying out region matting processing on the part covered by other parts if the coordinate comparison result represents that the part has a covering relationship.
Further, the determining unit includes:
the conversion module is used for respectively converting the vehicle damage identification result and the vehicle component identification result into matrix form representations to obtain a damage matrix and a component matrix;
the calculation module is used for calculating the Hadamard product of the damage matrix and the component matrix to obtain a damaged component comparison matrix;
and the determining module is used for determining vehicle damage information by utilizing the damaged component comparison matrix, wherein the vehicle damage information comprises damaged components, damage types and damage area ratios.
Further, the determining module includes:
the judgment submodule is used for traversing the damage matrix and the component matrix and judging whether the damage component comparison matrix is zero or not; if so, determining that the damage does not occur in the corresponding vehicle component area; otherwise, judging that the damage occurs in the corresponding vehicle component area;
and the summarizing submodule is used for summarizing the vehicle damage identification result of the damage occurring in the corresponding vehicle component area and the vehicle component identification result to obtain vehicle damage information.
Further, the integration unit includes:
the first integration module is used for integrating the vehicle damage information according to the damage area ratio aiming at the vehicle damage information of the same damage type under the same component, and reserving the vehicle damage information with the largest damage area ratio of the same damage type under the same component;
the second integration module is used for integrating the vehicle damage information according to the pre-established damage type severity sequence aiming at the vehicle damage information of different damage types under the same component and reserving the vehicle damage information with the highest damage type severity under the same component;
and the summarizing module is used for summarizing the vehicle damage information of all the components and outputting a final vehicle damage assessment result.
According to a third aspect of the present application, there is provided a storage medium having stored thereon a computer program which, when executed by a processor, implements the above-described vehicle damage assessment method.
According to a fourth aspect of the present application, there is provided a vehicle damage assessment apparatus comprising a storage medium, a processor and a computer program stored on the storage medium and operable on the processor, the processor implementing the vehicle damage assessment method when executing the program.
By means of the above technical solution, compared with the existing approach of judging vehicle damage images by relying on an image recognition algorithm alone, the vehicle damage assessment method, device and equipment provided by the present application acquire vehicle image information comprising a plurality of preprocessed vehicle damage images shot from multiple directions, so that images taken under different shooting conditions are input in a standardized form; recognize each vehicle damage image with a pre-trained recognition model to obtain the vehicle damage recognition result and the vehicle component recognition result corresponding to each image; determine the vehicle damage information corresponding to each vehicle damage image according to the vehicle damage recognition result and the vehicle component recognition result, the vehicle damage information comprising the damaged component, the damage type and the damage area ratio; clean the vehicle damage information corresponding to each vehicle damage image according to preset logic rules, deleting and/or correcting vehicle damage information that does not meet the damage judgment conditions, to obtain the cleaned vehicle damage information of all the images; and finally integrate the cleaned vehicle damage information of all the images to output a final vehicle damage assessment result.
The foregoing description is only an overview of the technical solutions of the present application, and the present application can be implemented according to the content of the description in order to make the technical means of the present application more clearly understood, and the following detailed description of the present application is given in order to make the above and other objects, features, and advantages of the present application more clearly understandable.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a schematic flow chart illustrating a vehicle damage assessment method according to an embodiment of the present disclosure;
FIG. 2 is a schematic flow chart illustrating another vehicle damage assessment method provided by the embodiment of the present application;
FIGS. 3a-3g illustrate block flow diagrams of a vehicle damage assessment method provided by an embodiment of the present application;
FIG. 4 is a schematic structural diagram illustrating a vehicle damage assessment device according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram illustrating another vehicle damage assessment device provided by an embodiment of the present application.
Detailed Description
The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
In the related art, vehicle damage assessment can assist vehicle insurers in quickly determining the party responsible for an accident and the party to be compensated, so that insurance personnel can execute a corresponding claim settlement scheme according to the vehicle damage assessment result. Existing vehicle image damage assessment methods rely mainly on an image recognition algorithm to judge vehicle damage images. However, because the shooting angle and shooting distance of vehicle damage images at an accident scene are difficult to unify, because the images are affected by lighting and the surrounding environment, and because the appearance of parts differs greatly across vehicle models, the results of vehicle damage assessment performed by an image recognition algorithm alone are not highly accurate.
In order to solve the problem, the embodiment provides a vehicle damage assessment method, as shown in fig. 1, which can be applied to a service end of an accident handling platform, and includes the following steps:
101. vehicle image information is acquired.
The vehicle image information may be a plurality of preprocessed vehicle damage images shot from multiple directions. In general, after a traffic accident occurs, a surveyor collects vehicle damage images at the accident scene. To collect all-around vehicle damage images of the scene, the shooting terminal may be adjusted to different shooting angles and shooting distances, and the vehicle damage images shot from multiple directions may be preprocessed, for example by luminosity adjustment and pixel adjustment, so that the obtained vehicle damage images better meet the actual acquisition requirements.
As for the vehicle image information, pictures of the vehicle at different angles and distances may be shot by an intelligent shooting terminal and uploaded to the server, or pictures of the vehicle collected in advance and stored on a computer may be uploaded to the server in batches. These vehicle pictures are accident vehicle damage pictures and contain the damaged parts of the vehicle.
The execution subject of the embodiment of the invention may be a vehicle damage assessment device, for example the service end of an accident handling platform. It acquires the vehicle image information, uses the vehicle image information as the basis for vehicle damage assessment in the accident handling process, and feeds the vehicle image information into different recognition channels to obtain a vehicle damage recognition result and a vehicle component recognition result respectively.
102. And recognizing each vehicle damage image by using a pre-trained recognition model to obtain a vehicle damage recognition result and a vehicle component recognition result corresponding to each vehicle damage image.
The recognition model comprises a damage recognition model for recognizing damage features in a vehicle damage image and a component recognition model for recognizing component features in a vehicle damage image. Each vehicle damage image is recognized by the damage recognition model to obtain the vehicle damage recognition result corresponding to that image, where the vehicle damage recognition result comprises the damage type and damage area of the vehicle; each vehicle damage image is also recognized by the component recognition model to obtain the vehicle component recognition result, where the vehicle component recognition result comprises the component type and component area of the vehicle.
Before each vehicle damage image is recognized with the damage recognition model, the damage recognition model needs to be trained in advance. Specifically, vehicle damage images labeled in advance with damage types and damage regions may be input into a semantic segmentation neural network model for training to obtain the damage recognition model. In the labeling process, professionals may mark the damage types and damaged parts in the vehicle image information with a labeling tool; the damage types may include scratch, crack, corner deformation and the like, and the damage region may be the specific edge position of the damage, so that the vehicle damage information in the vehicle damage image is effectively recognized. Similarly, the component recognition model needs to be trained in advance. Specifically, vehicle damage images labeled in advance with component types and component areas may be input into a semantic segmentation neural network model for training to obtain the component recognition model. In the labeling process, professionals may mark the component types and component areas in the vehicle image information with a labeling tool; the component types may be divided by combining the left, right, front and rear directions with the names of appearance parts, and the component area may be the position information of the component on the vehicle, so that the vehicle component information in the vehicle damage image is effectively recognized.
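As a non-limiting illustrative sketch of the recognition step, the following Python fragment shows how per-pixel class maps might be obtained from the two models described above, assuming each model is a PyTorch semantic-segmentation network that outputs per-pixel class scores; the function name, the preprocessing and the dictionary-style model output are assumptions for illustration and are not mandated by this disclosure.

```python
import torch

def predict_class_map(model, image_tensor):
    """Run a semantic-segmentation model and return a per-pixel class map.

    image_tensor: float tensor of shape (3, H, W), already normalized.
    Returns an (H, W) integer array in which 0 is background and any other
    value is a damage class (damage model) or a component class (component model).
    """
    model.eval()
    with torch.no_grad():
        logits = model(image_tensor.unsqueeze(0))
        if isinstance(logits, dict):       # e.g. torchvision heads return {"out": ...}
            logits = logits["out"]
        class_map = logits.argmax(dim=1).squeeze(0)
    return class_map.cpu().numpy()

# Hypothetical usage, assuming damage_model and component_model were trained in advance:
# damage_map    = predict_class_map(damage_model, image_tensor)     # damage type per pixel
# component_map = predict_class_map(component_model, image_tensor)  # component type per pixel
```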
103. And determining vehicle damage information corresponding to each vehicle damage image according to the vehicle damage identification result corresponding to each vehicle damage image and the vehicle component identification result.
Since the vehicle damage identification result can only identify the damage type and the damage area of the vehicle, but cannot judge the damaged vehicle component, the vehicle damage identification result and the vehicle component identification result are combined to judge the damage, the component type of each damage is judged, and the vehicle damage information is accurately positioned.
Specifically, the damage type of the vehicle in the vehicle damage identification result can be labeled in the vehicle image according to the damage area, the damage area is compared with the component area in the vehicle component identification result for each damage area, if the comparison areas are overlapped, damage occurs in the component area, and further the corresponding damage type and the component type can be determined, otherwise, the component area is not damaged.
104. And cleaning the vehicle damage information corresponding to each vehicle damage image according to a preset logic rule, and deleting and/or correcting the vehicle damage information which does not meet the damage judgment condition to obtain the cleaned vehicle damage information of all the images.
It can be understood that the vehicle damage information formed by combining the vehicle damage recognition result and the vehicle component recognition result may contain damage results that do not accord with manual experience. Logic rules may therefore be preset to clean the vehicle damage information; these rules amount to a series of judgment conditions compiled by professionals and can be used to screen out vehicle damage information that does not meet the damage judgment conditions, which is then deleted or corrected. Vehicle damage information that obviously does not meet the damage judgment conditions, for example because the damage type is obviously wrong, can be deleted directly; when the left and right directions of a damaged part are recognized incorrectly, the vehicle damage information can be corrected.
105. And integrating the cleaned vehicle damage information of all the images and outputting a final vehicle damage assessment result.
Repeated damage information can be identified by a plurality of vehicle damage images of the same accident vehicle, and certain information redundancy exists. The process of integrating the vehicle damage information after cleaning of all the images mainly includes two aspects, that is, on one hand, the vehicle damage information can be integrated based on the same damage type under the same component, and on the other hand, the vehicle damage information can be integrated based on different damage types under the same component.
Integrating the vehicle damage information based on the same damage type under the same component means summarizing, for that component, the vehicle damage information belonging to the same damage type; specifically, the information may be integrated according to the damage area, keeping after summarization the piece of vehicle damage information with the largest damage area, or keeping the most severely damaged piece, and the summarization method is not limited here. Integrating the vehicle damage information based on different damage types under the same component may be done according to the severity corresponding to each damage type, keeping after summarization the vehicle damage information with the highest damage severity, or according to the number of repair items corresponding to each damage type, keeping the vehicle damage information with the largest number of repair items; the summarization method is likewise not limited here.
Compared with the existing approach of judging vehicle damage images by relying on an image recognition algorithm alone, the vehicle damage assessment method provided by the embodiment of the present application acquires vehicle image information comprising a plurality of preprocessed vehicle damage images shot from multiple directions, so that images taken under different shooting conditions are input in a standardized form; recognizes each vehicle damage image with a pre-trained recognition model to obtain the vehicle damage recognition result and the vehicle component recognition result corresponding to each image, so that the vehicle damage image is recognized at the pixel level; determines the vehicle damage information corresponding to each vehicle damage image according to the vehicle damage recognition result and the vehicle component recognition result; cleans the vehicle damage information corresponding to each vehicle damage image according to preset logic rules to obtain the cleaned vehicle damage information of all the images; and finally integrates the cleaned vehicle damage information of all the images to output a final vehicle damage assessment result.
Further, as a refinement and an extension of the specific implementation of the foregoing embodiment, in order to fully illustrate the specific implementation process of the present embodiment, the present embodiment provides another vehicle damage assessment method, as shown in fig. 2, where the method includes:
201. vehicle image information is acquired.
It can be understood that, since fraudulent behavior may occur when the vehicle image information is uploaded, in order to ensure the accuracy of the vehicle image information, a preset analysis toolkit may be used after the vehicle image information is acquired to detect whether the vehicle image information has false features; if so, a risk prompt for the vehicle picture information is output. The false feature detection mainly involves two aspects. On the one hand, it is detected whether a vehicle picture has been retouched (for example, processed with image-editing software); specifically, this can be judged by analyzing the EXIF information of the picture, analyzing the noise distribution of the picture, and other detection methods, and if the result of any one detection method indicates that the vehicle picture has been retouched, a risk warning for the vehicle picture is output. On the other hand, the shooting time of the vehicle pictures is detected; specifically, the vehicle pictures belonging to the same accident case can be checked, and if they were all shot on the same day, no risk warning is raised, otherwise a risk warning for the vehicle pictures is output. Of course, detection may also be performed on the shooting device, the shooting scene and the like, and the detection method for the vehicle pictures is not limited here.
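As a non-limiting sketch of the shooting-time check mentioned above, the fragment below reads EXIF timestamps with the Pillow library and applies the same-day rule; the library choice, tag handling and return convention are illustrative assumptions.

```python
from datetime import datetime
from PIL import Image

def exif_capture_time(path):
    """Return the capture time recorded in a photo's EXIF data, or None if absent.
    A missing or stripped EXIF block is itself a weak signal that the picture
    may have been edited or re-saved."""
    exif = Image.open(path).getexif()
    # DateTime (tag 306) lives in the base IFD; DateTimeOriginal (tag 36867)
    # sits in the Exif sub-IFD reached through pointer tag 0x8769.
    raw = exif.get(306) or exif.get_ifd(0x8769).get(36867)
    return datetime.strptime(raw, "%Y:%m:%d %H:%M:%S") if raw else None

def same_day_check(paths):
    """True when every timestamped photo of the case was shot on the same day
    (no risk warning); False otherwise (output a risk warning)."""
    days = {t.date() for t in map(exif_capture_time, paths) if t}
    return len(days) <= 1
```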
202. And deleting and/or adjusting the vehicle damage images which do not accord with the shooting conditions by preprocessing the vehicle damage images shot in multiple directions.
In general, the vehicle image information may include a plurality of vehicle pictures shot at different angles; not every picture contains damage, and the uploaded vehicle image information may include vehicle pictures that do not meet the shooting requirements, such as blurred pictures or extreme close-ups. The vehicle image information can therefore be cleaned, for example by deleting pictures that are severely blurred, contain no damage, or are extreme close-ups, and can also be adjusted with picture preprocessing techniques; for example, image alignment and photometric adjustment may be applied to pictures shot at non-frontal angles and over-exposed pictures.
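The following is a minimal sketch of two of the cleaning operations mentioned above (discarding blurred pictures and reducing the brightness of over-exposed ones), assuming OpenCV is available; the Laplacian-variance threshold and the gamma value are illustrative assumptions rather than parameters of the disclosed method.

```python
import cv2
import numpy as np

def is_too_blurry(image_bgr, threshold=100.0):
    """Variance of the Laplacian is a common sharpness proxy: low variance
    means few edges, i.e. a blurred photo that can be discarded.
    The threshold is an illustrative assumption to be tuned on real data."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var() < threshold

def tone_down_exposure(image_bgr, gamma=1.5):
    """Simple gamma correction (gamma > 1 darkens) as one possible form of the
    photometric adjustment applied to over-exposed pictures."""
    lut = (((np.arange(256) / 255.0) ** gamma) * 255).astype("uint8")
    return cv2.LUT(image_bgr, lut)
```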
203. Historical case images of different accident vehicles are collected in advance.
The historical case images of different accident vehicles collected here may cover vehicles of different models on the market and include various vehicle damage images shot from different directions and at different distances, so as to be as close as possible to the images shot by loss assessment personnel at actual accident scenes.
204. And marking the vehicle appearance damage and the vehicle appearance component appearing in the historical case image in the form of a characteristic example graph to form vehicle damage marking data and vehicle component marking data.
The characteristic example graph serves as a legend for dividing vehicle appearance damage and vehicle appearance components. For the vehicle damage labeling data, the damage type and damage area of each vehicle appearance damage appearing in the historical case images can be labeled in a polygonal pixel-level semantic segmentation manner; for the vehicle component labeling data, the component type and component area of each vehicle appearance component appearing in the historical case images can be labeled in the same polygonal pixel-level semantic segmentation manner.
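For illustration only, the following sketch shows how polygon labels of the kind described above could be rasterized into per-pixel class masks suitable for training a semantic segmentation model; the data layout (lists of (x, y) points with one class id per polygon) is an assumption about the annotation format.

```python
import cv2
import numpy as np

def polygons_to_mask(height, width, polygons, class_ids):
    """Rasterize polygon labels (one list of (x, y) points per region) into a
    single-channel class mask, the usual training target for semantic
    segmentation.  class_ids[i] is the damage type or component type of
    polygons[i]; 0 is left as background."""
    mask = np.zeros((height, width), dtype=np.uint8)
    for pts, cls in zip(polygons, class_ids):
        cv2.fillPoly(mask, [np.asarray(pts, dtype=np.int32)], int(cls))
    return mask
```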
205. And respectively inputting the historical case images carrying the vehicle damage labeling data and the vehicle component labeling data into a semantic segmentation model based on a deep learning framework as sample images for training, and constructing a damage recognition model and a component recognition model.
The damage recognition model is used for recognizing vehicle damage images in combination with the vehicle damage labeling data and adopts a semantic segmentation neural network model based on the PyTorch framework; the trained damage recognition model can recognize damage types and damage areas from vehicle damage images. In particular, in the process of training the damage recognition model, the best-performing damage recognition model can be obtained through data augmentation, step size adjustment, optimization of the number of iterations and other means. The component recognition model is used for recognizing vehicle damage images in combination with the vehicle component labeling data and likewise adopts a semantic segmentation neural network model based on the PyTorch framework; the trained component recognition model can recognize component types and component areas from vehicle damage images. Similarly, the best-performing component recognition model can be obtained through data augmentation, step size adjustment, optimization of the number of iterations and other means during training.
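A minimal training sketch under stated assumptions is given below: it uses torchvision's DeepLabV3 purely as one example of "a semantic segmentation neural network model based on the PyTorch framework", and the data loader, learning rate, number of epochs and loss are illustrative choices, not part of the disclosure.

```python
import torch
from torch import nn, optim
from torchvision.models.segmentation import deeplabv3_resnet50

def build_and_train(loader, num_classes, epochs=10, lr=1e-4, device="cuda"):
    """Illustrative training loop for one of the two recognizers (damage or
    component).  `loader` is assumed to yield (image, mask) batches where
    `mask` holds per-pixel class indices produced from the polygon labels."""
    model = deeplabv3_resnet50(weights=None, num_classes=num_classes).to(device)
    criterion = nn.CrossEntropyLoss()
    optimizer = optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for images, masks in loader:
            images, masks = images.to(device), masks.to(device).long()
            logits = model(images)["out"]     # (N, C, H, W) per-pixel class scores
            loss = criterion(logits, masks)   # masks: (N, H, W) class indices
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```

Separate calls with the damage classes and the component classes would yield the two models described above; data augmentation and iteration tuning are applied on top of this basic loop.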
206. And recognizing each vehicle damage image by using a pre-trained recognition model to obtain a vehicle damage recognition result and a vehicle component recognition result corresponding to each vehicle damage image.
It should be noted that, because there may be inclusion or coverage relationships between the components in the vehicle component recognition result, a pre-established vehicle exterior component distribution relation table may be used to compare the coordinates of the region position of each component in the vehicle component recognition result; the component distribution relation table records the region relation information of the distribution of vehicle components. If the coordinate comparison result indicates that a component is covered, region matting is performed on the component covered by other components; specifically, the occluding component or the contained component can be matted out of the region, so that the component recognition result is output accurately.
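The region matting described above might be sketched as follows, assuming each recognized component is available as a boolean pixel mask and the component distribution relation table is represented as pairs of (covered, covering) component names; both assumptions are for illustration only.

```python
# Illustrative fragment of a component-distribution relation table: each entry
# says the first component's region can be covered by, or contain, the second.
# The real table would be built from expert knowledge of vehicle exteriors.
COVERAGE_TABLE = [
    ("right_front_door", "right_front_handle"),
    ("right_front_door", "right_rearview_mirror"),
]

def matte_covered_regions(component_masks):
    """component_masks: dict mapping component name -> boolean (H, W) mask.
    For every covering pair, cut the covering component's pixels out of the
    covered component, so each pixel is attributed to exactly one part."""
    for covered, covering in COVERAGE_TABLE:
        if covered in component_masks and covering in component_masks:
            component_masks[covered] &= ~component_masks[covering]
    return component_masks
```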
207. And determining vehicle damage information corresponding to each vehicle damage image according to the vehicle damage identification result corresponding to each vehicle damage image and the vehicle component identification result.
Specifically, the vehicle damage identification result and the vehicle component identification result can be respectively converted into matrix form representations to obtain a damage matrix and a component matrix, then the hadamard product of the damage matrix and the component matrix is calculated to obtain a damaged component comparison matrix, and finally the damaged component comparison matrix is utilized to determine vehicle damage information, wherein the vehicle damage information comprises damaged components, damage types and damage area ratios.
Specifically, in the process of determining the vehicle damage information by using the damaged component comparison matrix, whether the damaged component comparison matrix is zero or not can be judged by traversing the damaged matrix and the component matrix; if so, determining that the damage does not occur in the corresponding vehicle component area; otherwise, judging that the damage occurs in the corresponding vehicle component area, further summarizing the vehicle damage identification result and the vehicle component identification result of the damage occurring in the corresponding vehicle component area, and obtaining vehicle damage information.
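A non-limiting sketch of the Hadamard-product comparison described above, assuming the damage and component recognition results have been converted into boolean pixel masks; the dictionary layout of the output is an illustrative assumption.

```python
import numpy as np

def damaged_components(damage_masks, component_masks):
    """damage_masks: dict damage_type -> boolean (H, W) mask from the damage model.
    component_masks: dict component_name -> boolean (H, W) mask from the component model.
    The element-wise (Hadamard) product of a damage mask and a component mask is
    non-zero exactly where the damage overlaps the component; an all-zero product
    means that component is not affected by that damage instance."""
    results = []
    for comp, comp_mask in component_masks.items():
        for dmg, dmg_mask in damage_masks.items():
            overlap = dmg_mask.astype(np.uint8) * comp_mask.astype(np.uint8)
            if overlap.any():                       # non-zero comparison matrix
                area_ratio = overlap.sum() / max(comp_mask.sum(), 1)
                results.append({"component": comp,
                                "damage_type": dmg,
                                "area_ratio": float(area_ratio)})
    return results
```

The same overlap matrix also yields the damage area ratio that is used later when integrating the results.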
208. And cleaning the vehicle damage information corresponding to each vehicle damage image according to a preset logic rule, and deleting and/or correcting the vehicle damage information which does not meet the damage judgment condition to obtain the cleaned vehicle damage information of all the images.
209. And integrating the cleaned vehicle damage information of all the images and outputting a final vehicle damage assessment result.
In the embodiment of the present invention, the overall process of integrating the vehicle damage information mainly comprises two aspects. On the one hand, vehicle damage information of the same damage type under the same component can be integrated according to the damage area ratio, keeping the one piece of vehicle damage information of that damage type with the largest damage area ratio under that component. On the other hand, vehicle damage information of different damage types under the same component can be integrated according to a pre-established ranking of damage type severity, keeping the vehicle damage information with the highest damage severity under that component. The vehicle damage information of all components is then summarized and the final vehicle damage assessment result is output.
In an actual application scenario, the vehicle damage assessment process is executed by eight modules, namely an image import module, an image preprocessing module, a damage recognition module, a component recognition module, a damage comparison module, a result correction module, a result integration module and a result output module, specifically implemented in combination with steps S10-S80 in fig. 3a. The image import module imports all images to be assessed for a single case, namely a group of images shot from multiple directions and at varying distances. The image preprocessing module performs preliminary analysis and processing on the colors, shadows and the like of the images to be assessed, which facilitates accurate damage assessment. The damage recognition module judges the damage type and position information of the vehicle in each image based on a damage recognition model trained in advance to the best effect. The component recognition module judges the appearance components and their position information in each image based on a component recognition model trained in advance to the best effect. The damage comparison module judges which appearance components are damaged. The result correction module deletes unreasonable recognition results in the case based on an expert experience model and corrects errors in recognizing the orientation of components. The result integration module integrates all results of the case based on a damage priority library established according to damage severity, keeping the most severe damage on each component. The result output module outputs the damage results for the case (damaged component + damage type + damage area ratio).
Further, the image preprocessing module is implemented as shown in steps S21-S24 in fig. 3b. The image preprocessing module performs a series of picture-processing operations on all images in the case to obtain a picture set that is easier for the image damage assessment process to recognize accurately. The implementation steps include: deleting photos that do not meet the shooting requirements, such as blurred, extreme close-up or out-of-focus photos; adjusting the luminosity of the photos to reduce the brightness of over-exposed images, where for reflective areas in strongly lit images a reflective-area detection and restoration algorithm built on OpenCV (the Open Source Computer Vision Library) darkens the reflective areas and avoids the recognition interference caused by glare; adjusting the size and pixels of the photos, for example by proportional compression; and adjusting images shot at non-frontal angles, for example automatically correcting photos shot from a non-frontal position to a parallel viewing angle.
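One possible, purely illustrative realization of the reflective-area detection and restoration step is sketched below using OpenCV inpainting; the brightness threshold, inpainting radius and choice of the TELEA algorithm are assumptions, and other restoration algorithms could equally be used.

```python
import cv2
import numpy as np

def repair_highlights(image_bgr, v_threshold=240):
    """Detect near-saturated (strongly reflective) pixels in the V channel of
    HSV space and fill them from their surroundings with OpenCV inpainting,
    so that glare does not interfere with recognition."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    glare_mask = (hsv[:, :, 2] >= v_threshold).astype(np.uint8) * 255
    return cv2.inpaint(image_bgr, glare_mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)
```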
Further, the damage recognition module is implemented as shown in steps S31-S33 in fig. 3c. The damage recognition module recognizes the damage types in the images to be assessed and determines the damaged areas based on a damage recognition model trained in advance to the best effect. The implementation steps include: first, collecting historical case images of accident vehicles covering common passenger vehicle types on the market such as sedans, SUVs and MPVs, where the images of each vehicle type are shot from multiple angles (front, rear, left and right) and at different distances from the vehicle, so as to fit as closely as possible the images shot by loss assessment personnel in the actual assessment process; establishing a damage characteristic illustration for the collected vehicle images and manually labeling the appearance damage appearing in the images in a polygonal pixel-level semantic segmentation manner according to the illustration, where the damage types comprise 13 categories such as scratch, crack, corner deformation and non-corner deformation; then dividing the labeled damage data set into a training set, a test set and a validation set for building the vehicle damage model, which is a semantic segmentation neural network model based on the PyTorch framework and realizes the recognition of damage types in an image and the determination of the specific edge position of each damage.
Further, the component recognition module is implemented as shown in steps S41-S44 in fig. 3d. The component recognition module recognizes the appearance components in the images to be assessed and determines the component regions based on a component recognition model trained in advance to the best effect. The implementation steps include: first, as in step S31 of fig. 3c, establishing characteristic illustrations of the same component under different vehicle types for the collected vehicle images and manually labeling all appearance components appearing in the images in a polygonal pixel-level semantic segmentation manner according to the illustrations; dividing the component types by combining the left, right, front and rear directions with the appearance component names into 129 categories such as left front door, left rear door, left front headlamp, left front fog lamp, front grille, front grille trim strip and front grille logo, which allows each appearance component of the body to be distinguished accurately with a detailed type division; then dividing the labeled component data set into a training set, a test set and a validation set for building the vehicle component recognition model, which is a semantic segmentation neural network model based on the PyTorch framework, so that the trained model can recognize the component types in an image and determine the specific edge position of each component. It should be noted that, because some vehicle appearance components have inclusion or coverage relationships (for example, the right front door contains the right front handle, and the right rearview mirror occludes part of the right front door), the occluded or contained component is re-matted by comparing the coordinates of each recognized component region, based on the component region information recognized by the component model and the pre-established component distribution relation table, so that the component recognition result is output accurately (for example, the right front handle is matted out of the right front door region and the position information of the right front door is output accurately).
Further, the damage comparison module is implemented as shown in steps S51-S54 in fig. 3e. The damage comparison module determines the appearance component of each recognized damage result, which can be implemented with a matrix Hadamard product calculation that is fast and precise. The implementation steps include: first, importing the damage types and the position information of each damage recognized by the damage recognition model for each image in the case, represented in matrix form; importing the component types and the position information of each component recognized by the component recognition model for each image in the case, also represented in matrix form; then, for every damaged area recognized in an image and every component area in the same image, calculating the Hadamard product of their pixel matrices. If the resulting matrix is non-zero, the damage has occurred on that component; if it is a zero matrix, the damage has not occurred on that component. The component on which each damage occurs is thereby confirmed, and finally the damage comparison results of all images in the same case are output.
Further, the result correction module is implemented as shown in steps S61-S63 in fig. 3f. The result correction module imports the damage results recognized by the models into an expert experience model, so as to eliminate and correct unreasonable damage in the case and output reasonable damage results. The expert experience model is based on the loss assessment experience of claims experts in the automobile insurance industry and, combined with the characteristics of the damage and component semantic segmentation neural network models, sets up thousands of logic rules whose detailed contents guarantee the accuracy of the image damage assessment result to the greatest extent. The implementation steps include: for damage elimination, removing specific damage that should not occur on a specific component, given the components on which that damage can occur. For example, glass-breakage damage can only occur on glass components such as the right front door glass, and because of the particularity of the material the right front door glass cannot deform; when the image recognition algorithm outputs a deformation damage on the right front door glass, the logic library judges the damage to be unreasonable and removes the result. For the correction of easily confused components, orientation recognition errors of symmetric components are corrected. For example, based on a large amount of vehicle claims experience, most traffic accidents damage components on only one side of the vehicle; if the output damage results contain damage to three left-side components (left front fender, left headlamp and left front door) and to one symmetric right-side component (right front door), it can be determined that the image recognition algorithm has made an orientation error, and the right front door damage in the accident is corrected to left front door damage. The logic library covers all appearance components that the component recognition model can recognize and all appearance damage that can occur on each of them, so correction by this module's multi-expert damage logic library can greatly improve the precision of the damage results.
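Two of the correction rules described above are sketched below as plain functions; the rule content (glass cannot deform; one-sided collisions) follows the text, while the component naming convention and the data layout of a damage record are illustrative assumptions.

```python
GLASS_PARTS = {"right_front_door_glass", "left_front_door_glass", "windshield"}

def drop_impossible_findings(findings):
    """Remove damage that cannot physically occur on a part,
    e.g. 'deformation' reported on a glass component."""
    return [f for f in findings
            if not (f["component"] in GLASS_PARTS and f["damage_type"] == "deformation")]

def fix_mirrored_side(findings):
    """If almost all damaged parts are on one side and a single symmetric part is
    reported on the other side, treat it as a left/right recognition error and
    flip that part to the majority side."""
    left = [f for f in findings if f["component"].startswith("left_")]
    right = [f for f in findings if f["component"].startswith("right_")]
    if len(left) >= 3 and len(right) == 1:
        right[0]["component"] = right[0]["component"].replace("right_", "left_", 1)
    elif len(right) >= 3 and len(left) == 1:
        left[0]["component"] = left[0]["component"].replace("left_", "right_", 1)
    return findings
```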
Further, the result integration module is implemented as shown in steps S71-S74 in fig. 3g and comprises two aspects: on the one hand, the integration, based on the damage area ratio, of the same damage type under the same component across all results of the case; on the other hand, the integration, based on a damage-degree priority library established from repair and replacement schemes, of different damage types under the same component in the case results. The implementation steps include: the same damage type under the same component reflects the damage information of that component, so identical recognition results within a single image of the case and across all images are de-duplicated, and the retained result is the one whose damage area ratio relative to the component is the largest. On top of removing duplicate damage, this integration still presents the most valuable damage information on the component. (For example, if scratch damage area A of the right front door in picture 1 accounts for 10% of the right front door in picture 1 and scratch damage area B of the right front door in picture 1 accounts for 20%, the information of scratch damage area B in picture 1 is retained first; if scratch damage area C of the right front door in picture 2 accounts for 30% of the right front door in picture 2, the information of scratch damage area C in picture 2 is retained instead, and so on until the record with the largest area ratio for the same damage type under the same component in the case is retained.) For different damage types under the same component, the repair scheme of the component can be judged as long as the most severe damage on the component is output. Therefore, based on actual repair and replacement schemes, a damage severity priority library covering all components is established; this priority library can rank all damage results and output the most severe damage result on the same component. This integration presents the most valuable damage information in the case. (For example, if the repair required for a scratch on the right front door is repainting, while the repair required for a slight deformation of the right front door is sheet-metal work plus repainting, the severity of the slight deformation is higher than that of the scratch; when a scratch and a slight deformation occur on the right front door at the same time, the slight deformation has priority over the scratch and the slight deformation of the right front door is retained.) Finally, through the processing of the result integration module, the most severe damage result on each component in a single case is output.
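A minimal sketch of the two integration steps described above is given below; the severity ranking is an assumed stand-in for the priority library built from repair and replacement schemes, and the record layout is illustrative.

```python
# Assumed stand-in for the damage-severity priority library described above.
SEVERITY = {"scratch": 1, "crack": 2, "slight_deformation": 3, "severe_deformation": 4}

def integrate_case(findings):
    """findings: list of {"component", "damage_type", "area_ratio"} dicts from all
    images of one case.  Keep, per (component, damage type), the record with the
    largest area ratio, then keep only the most severe damage type per component."""
    best_per_type = {}
    for f in findings:
        key = (f["component"], f["damage_type"])
        if key not in best_per_type or f["area_ratio"] > best_per_type[key]["area_ratio"]:
            best_per_type[key] = f

    final = {}
    for f in best_per_type.values():
        comp = f["component"]
        if (comp not in final
                or SEVERITY.get(f["damage_type"], 0) > SEVERITY.get(final[comp]["damage_type"], 0)):
            final[comp] = f
    return list(final.values())
```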
Further, as a specific implementation of the method in fig. 1-2, an embodiment of the present application provides a vehicle damage assessment apparatus, as shown in fig. 4, the apparatus comprising: an acquisition unit 301, an identification unit 302, a determination unit 303, a cleaning unit 304, and an integration unit 305.
The acquiring unit 301 may be configured to acquire vehicle image information, where the vehicle image information includes a plurality of vehicle damage images which are shot in multiple directions and preprocessed;
the identifying unit 302 may be configured to identify each vehicle damage image by using a pre-trained identification model, so as to obtain a vehicle damage identification result and a vehicle component identification result corresponding to each vehicle damage image, where the vehicle damage identification result includes a damage type and a damage area of a vehicle, and the vehicle component identification result includes a component type and a component area of the vehicle;
the determining unit 303 may be configured to determine vehicle damage information corresponding to each vehicle damage image according to a vehicle damage identification result corresponding to each vehicle damage image and the vehicle component identification result, where the vehicle damage information includes a damaged component, a damage type, and a damaged area ratio;
the cleaning unit 304 may be configured to clean the vehicle damage information corresponding to each vehicle damage image according to a preset logic rule, and delete and/or correct the vehicle damage information that does not meet the damage determination condition, so as to obtain the cleaned vehicle damage information of all the images;
the integration unit 305 may be configured to integrate the washed vehicle damage information of all the images and output a final vehicle damage assessment result.
Compared with the existing approach of judging vehicle damage images with an image recognition algorithm alone, the vehicle damage assessment device provided by the embodiment of the invention acquires vehicle image information comprising a plurality of preprocessed vehicle damage images shot from multiple directions, which standardizes the input of images taken under different shooting conditions; identifies each vehicle damage image with a pre-trained recognition model to obtain the corresponding vehicle damage identification result and vehicle component identification result, achieving pixel-level recognition of the vehicle damage images; determines the vehicle damage information corresponding to each vehicle damage image, including the damaged component, the damage type, and the damage area ratio, from the vehicle damage identification result and the vehicle component identification result; cleans the vehicle damage information corresponding to each vehicle damage image according to the preset logic rules, deleting and/or correcting the vehicle damage information that does not meet the damage determination conditions to obtain the cleaned vehicle damage information of all the images; and finally integrates the cleaned vehicle damage information of all the images and outputs the final vehicle damage assessment result.
In a specific application scenario, as shown in fig. 5, the apparatus further includes:
the processing unit 306 may be configured to, after the vehicle image information is obtained, delete and/or adjust the vehicle damage image that does not meet the shooting condition by preprocessing the vehicle damage image shot in multiple directions.
In a specific application scenario, as shown in fig. 5, the recognition model includes a damage recognition model and a component recognition model, and the apparatus further includes:
the collecting unit 307 may be configured to collect historical case images of different accident vehicles in advance before the pre-trained recognition model is used to recognize each vehicle damage image and obtain a vehicle damage recognition result and a vehicle component recognition result corresponding to each vehicle damage image;
the labeling unit 308 can be used for labeling the vehicle appearance damage and the vehicle appearance component appearing in the historical case image in the form of a characteristic example graph to form vehicle damage labeling data and vehicle component labeling data;
the training unit 309 may be configured to input the historical case images carrying the vehicle damage labeling data and the vehicle component labeling data as sample images into a semantic segmentation model based on a deep learning framework for training, and construct a damage identification model and a component identification model, where the damage identification model is used to identify the vehicle damage images in combination with the vehicle damage labeling data, and the component identification model is used to identify the vehicle damage images in combination with the vehicle component labeling data.
In a specific application scenario, the labeling unit 308 may be specifically configured to label the damage type and the damage region of the appearance damage appearing in the historical case image in a polygonal pixel semantic segmentation manner, so as to form vehicle damage labeling data;
the labeling unit 308 may be further configured to label a component type and a component area of the vehicle component appearing in the historical case image in a polygonal pixel semantic segmentation manner, so as to form vehicle component labeling data.
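A possible shape for such polygon labels, and for rasterizing them into the per-pixel masks used in training, is sketched below; the JSON-like layout, class names, and class-id map are assumptions for illustration only.

# Illustrative polygon labels and rasterization into class-index masks;
# the annotation layout and class map are assumptions, not the patent's format.
import numpy as np
from PIL import Image, ImageDraw

annotation = {
    "image": "case_0001.jpg",
    "labels": [
        {"category": "right_front_door", "polygon": [(120, 80), (480, 85), (470, 400), (115, 395)]},
        {"category": "scratch",          "polygon": [(200, 150), (320, 160), (310, 220), (205, 210)]},
    ],
}

CLASS_IDS = {"background": 0, "right_front_door": 1, "scratch": 2}   # assumed class map

def rasterize(annotation, height, width):
    mask = Image.new("L", (width, height), CLASS_IDS["background"])
    draw = ImageDraw.Draw(mask)
    for label in annotation["labels"]:
        draw.polygon(label["polygon"], fill=CLASS_IDS[label["category"]])
    return np.array(mask)    # (H, W) array of per-pixel class indices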
In a specific application scenario, as shown in fig. 5, the apparatus further includes:
a comparing unit 310, configured to compare coordinates of an area position of each component in the vehicle component identification result with a pre-established vehicle appearance component distribution relation table after the pre-trained identification model is used to identify each vehicle damage image to obtain a vehicle damage identification result and a vehicle component identification result corresponding to each vehicle damage image, where the area relation information of vehicle component distribution is recorded in the component distribution relation table;
the matting unit 311 may be configured to perform region matting processing on a part covered by another part if the coordinate comparison result represents that the part has a covering relationship.
In a specific application scenario, as shown in fig. 5, the determining unit 303 includes:
a conversion module 3031, configured to convert the vehicle damage identification result and the vehicle component identification result into matrix representations respectively, so as to obtain a damage matrix and a component matrix;
a calculating module 3032, configured to calculate a hadamard product of the damage matrix and the component matrix to obtain a damaged component comparison matrix;
the determining module 3033 may be configured to determine vehicle damage information by using the damaged component comparison matrix, where the vehicle damage information includes a damaged component, a damage type, and a damaged area ratio.
In a specific application scenario, as shown in fig. 5, the determining module 3033 includes:
a determining submodule 30331, configured to traverse the damage matrix and the component matrix and determine whether the damaged component comparison matrix is zero; if so, determine that no damage occurs in the corresponding vehicle component area; otherwise, determine that damage occurs in the corresponding vehicle component area;
the summarizing submodule 30332 may be configured to summarize the vehicle damage identification result and the vehicle component identification result of the damage occurring in the corresponding vehicle component region, so as to obtain vehicle damage information.
In a specific application scenario, as shown in fig. 5, the integration unit 305 includes:
the first integrating module 3051 is configured to integrate, according to a damage area ratio, vehicle damage information of the same damage type for the same component, and retain a piece of vehicle damage information of the same damage type with the largest damage area ratio for the same component;
the second integrating module 3052 is configured to, for vehicle damage information of different damage types under the same component, integrate the vehicle damage information according to a pre-established damage type severity ranking, and retain the vehicle damage information with the highest damage type severity under the same component;
the summarizing module 3053 may be configured to summarize vehicle damage information of all components and output a final vehicle damage assessment result.
It should be noted that, for other descriptions of the functional units of the vehicle damage assessment device applicable to the server side provided in this embodiment, reference may be made to the corresponding descriptions in fig. 1 and fig. 2, which are not repeated here.
Based on the method shown in fig. 1-2, correspondingly, the embodiment of the present application further provides a storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the vehicle damage assessment method shown in fig. 1-2;
based on such understanding, the technical solution of the present application may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.), and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the implementation scenarios of the present application.
Based on the method shown in fig. 1-2 and the virtual device embodiment shown in fig. 4-5, in order to achieve the above object, an embodiment of the present application further provides a server entity device, which may specifically be a computer, a server, or other network devices, and the entity device includes a storage medium and a processor; a storage medium for storing a computer program; a processor for executing a computer program for implementing the vehicle damage assessment method as described above with reference to fig. 1-2.
Optionally, the above entity devices may further include a user interface, a network interface, a camera, a Radio Frequency (RF) circuit, a sensor, an audio circuit, a WI-FI module, and the like. The user interface may include a Display screen (Display), an input unit such as a keypad (Keyboard), etc., and the optional user interface may also include a USB interface, a card reader interface, etc. The network interface may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), etc.
It will be understood by those skilled in the art that the structure of the vehicle damage assessment physical device provided in this embodiment does not constitute a limitation on the physical device, which may include more or fewer components, combine certain components, or arrange the components differently.
The storage medium may further include an operating system and a network communication module. The operating system is a program that manages hardware and software resources of the physical device of the above-described vehicle damage assessment, and supports the operation of the information processing program and other software and/or programs. The network communication module is used for realizing communication among components in the storage medium and communication with other hardware and software in the information processing entity device.
Through the above description of the embodiments, those skilled in the art will clearly understand that the present application can be implemented by software plus a necessary general hardware platform, or by hardware. Compared with the prior art, the method and the device determine the vehicle damage information corresponding to each vehicle damage image from the vehicle damage identification result and the vehicle component identification result, the vehicle damage information comprising the damaged component, the damage type, and the damage area ratio; then clean the vehicle damage information corresponding to each vehicle damage image according to a preset logic rule, deleting and/or correcting the vehicle damage information that does not meet the damage determination conditions to obtain the cleaned vehicle damage information of all the images; and finally integrate the cleaned vehicle damage information of all the images to output a final vehicle damage assessment result. This process makes up for the shortcomings of performing vehicle damage assessment with an image recognition algorithm alone, integrates the vehicle damage information more reasonably, and improves the accuracy of the vehicle damage assessment result.
Those skilled in the art will appreciate that the figures are merely schematic representations of one preferred implementation scenario and that the blocks or flow diagrams in the figures are not necessarily required to practice the present application. Those skilled in the art will appreciate that the modules in the devices in the implementation scenario may be distributed in the devices in the implementation scenario according to the description of the implementation scenario, or may be located in one or more devices different from the present implementation scenario with corresponding changes. The modules of the implementation scenario may be combined into one module, or may be further split into a plurality of sub-modules.
The above application serial numbers are for description purposes only and do not represent the superiority or inferiority of the implementation scenarios. The above disclosure is only a few specific implementation scenarios of the present application, but the present application is not limited thereto, and any variations that can be made by those skilled in the art are intended to fall within the scope of the present application.

Claims (11)

1. A method of vehicle damage assessment, comprising:
acquiring vehicle image information, wherein the vehicle image information comprises a plurality of vehicle damage images which are shot in multiple directions and subjected to preprocessing;
recognizing each vehicle damage image by using a pre-trained recognition model to obtain a vehicle damage recognition result and a vehicle component recognition result corresponding to each vehicle damage image, wherein the vehicle damage recognition result comprises a damage type and a damage area of a vehicle, and the vehicle component recognition result comprises a component type and a component area of the vehicle;
determining vehicle damage information corresponding to each vehicle damage image according to the vehicle damage identification result corresponding to each vehicle damage image and the vehicle part identification result, wherein the vehicle damage information comprises a damaged part, a damage type and a damaged area ratio;
cleaning the vehicle damage information corresponding to each vehicle damage image according to a preset logic rule, and deleting and/or correcting the vehicle damage information which does not meet the damage judgment condition to obtain the cleaned vehicle damage information of all the images;
and integrating the cleaned vehicle damage information of all the images and outputting a final vehicle damage assessment result.
2. The method of claim 1, wherein after said obtaining vehicle image information, the method further comprises:
and deleting and/or adjusting the vehicle damage images which do not accord with the shooting conditions by preprocessing the vehicle damage images shot in multiple directions.
3. The method according to claim 1, wherein the recognition model comprises a damage recognition model and a component recognition model, and before the recognition of each vehicle damage image by using the pre-trained recognition model to obtain the vehicle damage recognition result and the vehicle component recognition result corresponding to each vehicle damage image, the method further comprises:
collecting historical case images of different accident vehicles in advance;
marking the vehicle appearance damage and the vehicle appearance component appearing in the historical case image in the form of a characteristic example graph to form vehicle damage marking data and vehicle component marking data;
respectively inputting historical case images carrying vehicle damage labeling data and vehicle component labeling data into a semantic segmentation model based on a deep learning framework as sample images for training, and constructing a damage identification model and a component identification model, wherein the damage identification model is used for identifying the vehicle damage images in combination with the vehicle damage labeling data, and the component identification model is used for identifying the vehicle damage images in combination with the vehicle component labeling data.
4. The method according to claim 3, wherein the vehicle appearance damage and the vehicle appearance component appearing in the historical case image are labeled in the form of a characteristic example graph to form vehicle damage labeling data and vehicle component labeling data, and the method specifically comprises the following steps:
marking the vehicle appearance damage appearing in the historical case image in a polygonal pixel semantic segmentation mode to form vehicle damage marking data;
and labeling the vehicle appearance component appearing in the historical case image in a polygonal pixel semantic segmentation mode to obtain component type and component area, and forming vehicle component labeling data.
5. The method according to claim 1, wherein after the identifying each vehicle damage image by using the pre-trained identification model to obtain the vehicle damage identification result and the vehicle component identification result corresponding to each vehicle damage image, the method further comprises:
comparing the coordinates of the area position of each part in the vehicle part identification result by using a pre-established vehicle appearance part distribution relation table, wherein the area relation information of vehicle part distribution is recorded in the part distribution relation table;
and if the coordinate comparison result represents that the part has a covering relationship, carrying out region matting processing on the part covered by other parts.
6. The method according to claim 1, wherein the determining the vehicle damage information corresponding to each vehicle damage image according to the vehicle damage identification result and the vehicle component identification result corresponding to each vehicle damage image specifically comprises:
respectively converting the vehicle damage identification result and the vehicle component identification result into matrix form representations to obtain a damage matrix and a component matrix;
calculating a Hadamard product of the damage matrix and the component matrix to obtain a damaged component comparison matrix;
and determining vehicle damage information by utilizing the damaged component comparison matrix, wherein the vehicle damage information comprises damaged components, damage types and damage area ratios.
7. The method according to claim 6, wherein the determining vehicle damage information using the damaged component comparison matrix specifically comprises:
traversing the damage matrix and the component matrix, and judging whether the damaged component comparison matrix is zero or not; if so, determining that the damage does not occur in the corresponding vehicle component area; otherwise, judging that the damage occurs in the corresponding vehicle component area;
and summarizing the vehicle damage identification result of the damage occurring in the corresponding vehicle component area and the vehicle component identification result to obtain vehicle damage information.
8. The method according to any one of claims 1 to 7, wherein the integrating the washed vehicle damage information of all the images and outputting a final vehicle damage assessment result specifically comprises:
integrating vehicle damage information of the same damage type under the same component according to the damage area ratio, and reserving the vehicle damage information of the same damage type under the same component with the largest damage area ratio;
aiming at vehicle damage information of different damage types under the same component, integrating the vehicle damage information according to a pre-established damage type severity sequence, and reserving the vehicle damage information with the highest damage type severity under the same component;
and summarizing the vehicle damage information of all the components and outputting a final vehicle damage assessment result.
9. A vehicle damage assessment device, comprising:
the vehicle damage detection device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring vehicle image information which comprises a plurality of vehicle damage images shot in multiple directions and subjected to preprocessing;
the vehicle damage recognition system comprises a recognition unit, a processing unit and a processing unit, wherein the recognition unit is used for recognizing each vehicle damage image by using a pre-trained recognition model to obtain a vehicle damage recognition result and a vehicle component recognition result corresponding to each vehicle damage image, the vehicle damage recognition result comprises a damage type and a damage area of a vehicle, and the vehicle component recognition result comprises a component type and a component area of the vehicle;
the determining unit is used for determining vehicle damage information corresponding to each vehicle damage image according to the vehicle damage identification result corresponding to each vehicle damage image and the vehicle component identification result, and the vehicle damage information comprises a damaged component, a damage type and a damaged area ratio;
the cleaning unit is used for cleaning the vehicle damage information corresponding to each vehicle damage image according to a preset logic rule, and deleting and/or correcting the vehicle damage information which does not meet the damage judgment condition to obtain the cleaned vehicle damage information of all the images;
and the integration unit is used for integrating the cleaned vehicle damage information of all the images and outputting a final vehicle damage assessment result.
10. A storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the vehicle damage assessment method according to any one of claims 1 to 8.
11. A vehicle damage assessment device comprising a storage medium, a processor and a computer program stored on the storage medium and operable on the processor, wherein the processor when executing said program implements the vehicle damage assessment method according to any one of claims 1 to 8.
CN202110860166.0A 2021-07-28 2021-07-28 Vehicle damage assessment method, device and equipment Active CN113705351B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110860166.0A CN113705351B (en) 2021-07-28 2021-07-28 Vehicle damage assessment method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110860166.0A CN113705351B (en) 2021-07-28 2021-07-28 Vehicle damage assessment method, device and equipment

Publications (2)

Publication Number Publication Date
CN113705351A true CN113705351A (en) 2021-11-26
CN113705351B CN113705351B (en) 2024-05-14

Family

ID=78650880

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110860166.0A Active CN113705351B (en) 2021-07-28 2021-07-28 Vehicle damage assessment method, device and equipment

Country Status (1)

Country Link
CN (1) CN113705351B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107403424A (en) * 2017-04-11 2017-11-28 阿里巴巴集团控股有限公司 A kind of car damage identification method based on image, device and electronic equipment
CN112435215A (en) * 2017-04-11 2021-03-02 创新先进技术有限公司 Vehicle loss assessment method based on image, mobile terminal and server
CN109325531A (en) * 2018-09-17 2019-02-12 平安科技(深圳)有限公司 Car damage identification method, device, equipment and storage medium based on image
CN111488875A (en) * 2020-06-24 2020-08-04 爱保科技有限公司 Vehicle insurance claim settlement loss checking method and device based on image recognition and electronic equipment
CN111612104A (en) * 2020-06-30 2020-09-01 爱保科技有限公司 Vehicle loss assessment image acquisition method, device, medium and electronic equipment
CN112818811A (en) * 2021-01-27 2021-05-18 北京巅峰科技有限公司 Vehicle damage assessment method and device
CN113706315A (en) * 2021-07-28 2021-11-26 中国银行保险信息技术管理有限公司 Vehicle damage assessment method, device and equipment

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114241398A (en) * 2022-02-23 2022-03-25 深圳壹账通科技服务有限公司 Vehicle damage assessment method, device, equipment and storage medium based on artificial intelligence
CN115115611A (en) * 2022-07-21 2022-09-27 明觉科技(北京)有限公司 Vehicle damage identification method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN113705351B (en) 2024-05-14

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant