CN114842198A - Intelligent loss assessment method, device and equipment for vehicle and storage medium - Google Patents

Intelligent loss assessment method, device and equipment for vehicle and storage medium

Info

Publication number
CN114842198A
CN114842198A (application CN202210606939.7A)
Authority
CN
China
Prior art keywords
vehicle
segmentation
damage
map
detail
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210606939.7A
Other languages
Chinese (zh)
Inventor
童新宇
刘莉红
刘玉宇
肖京
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202210606939.7A
Publication of CN114842198A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00: Administration; Management
    • G06Q 10/20: Administration of product repair or maintenance
    • G06Q 40/00: Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q 40/08: Insurance
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/20: Image preprocessing
    • G06V 10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 10/28: Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • G06V 10/40: Extraction of image or video features
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74: Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/764: Arrangements using pattern recognition or machine learning using classification, e.g. of video objects
    • G06V 10/82: Arrangements using pattern recognition or machine learning using neural networks

Abstract

The application relates to the technical field of artificial intelligence and provides an intelligent vehicle damage assessment method, apparatus, device, and storage medium. The method comprises the following steps: inputting a vehicle damage assessment picture to be recognized into a pre-trained semantic segmentation network for segmentation to obtain a gray-scale image and the pixel value of each pixel in the gray-scale image; classifying according to the pixel values to obtain an initial vehicle component list and a segmentation map for each vehicle component composed of pixels of the same value; matching the segmentation maps one by one against detail maps in a pre-constructed detail map feature library; for each matched segmentation map, replacing the component information corresponding to that segmentation map in the initial vehicle component list with the component information of the matched detail map, while leaving unmatched segmentation maps unmodified, to obtain a final vehicle component list; and outputting a vehicle maintenance scheme according to the final vehicle component list. The invention reduces mismatched vehicle components and improves matching accuracy.

Description

Intelligent loss assessment method, device, equipment and storage medium for vehicle
Technical Field
The present application relates to the field of artificial intelligence technologies, and in particular, to a method, an apparatus, a device, and a storage medium for vehicle intelligent damage assessment.
Background
With the rapid development of China's economy, private car ownership rises year by year, and with it the number of vehicle damage events caused by traffic accidents. After a traffic accident, certain parts of a vehicle are left with damage traces such as dents and scratches. Vehicle damage claim settlement schemes generally use a semantic segmentation algorithm to segment the vehicle's exterior parts in a picture. However, the vehicle part segmentation module in the original claim settlement scheme is prone to misidentifying part orientation in a detail picture, for example recognizing a right rear fender as a left front fender, which in turn produces erroneous maintenance results. The original claim settlement scheme therefore cannot assess damage accurately from detail pictures: for partial detail views, such as a vehicle door without a door handle, or a partial fender and door, the segmentation model easily misidentifies orientation. Such assessment errors greatly reduce the accuracy of damage picture recognition, causing cost losses to insurance companies while reducing the satisfaction of car owners and customers.
Disclosure of Invention
The application provides an intelligent vehicle damage assessment method, apparatus, device, and storage medium, aiming to solve the problem of low vehicle component identification accuracy in vehicle damage assessment.
In order to solve the technical problem, the application adopts the following technical solution: an intelligent vehicle damage assessment method is provided, comprising the following steps: inputting a vehicle damage assessment picture to be recognized into a pre-trained semantic segmentation network for segmentation to obtain a gray-scale image and the pixel value of each pixel in the gray-scale image;
classifying according to the pixel value of each pixel in the gray-scale image to obtain an initial vehicle component list and a segmentation image corresponding to each vehicle component consisting of the same pixel value;
matching the segmentation maps with detail maps in a pre-constructed detail map feature library one by one, wherein the detail map feature library stores the detail maps and corresponding component information;
according to the matching result, replacing the part information corresponding to the segmentation map in the initial vehicle part list with the part information of the matched detail map for the matched segmentation map, and not modifying the unmatched segmentation map to obtain a final vehicle part list;
and outputting the vehicle maintenance scheme according to the final vehicle component list.
As a further improvement of the application, establishing the detail map feature library comprises the following steps:
acquiring multi-angle shot pictures of various vehicle types;
cutting and generating detailed diagrams of each vehicle part of various vehicle types according to the position of the boundary line of the vehicle part, and acquiring vehicle part information corresponding to each detailed diagram;
extracting feature vectors of the detail map by using a semantic segmentation network;
and storing the feature vectors and the corresponding vehicle component information in pairs to obtain a detail map feature library.
As a further improvement of the present application, after the feature vectors and the corresponding vehicle component information are stored in pairs to obtain the detail map feature library, the method further includes:
extracting a feature vector to be matched of each segmentation graph by using a semantic segmentation network;
and matching each feature vector to be matched with the feature vector of each detail drawing in the detail drawing feature library respectively to determine whether the feature vector matched with the feature vector to be matched exists in the detail drawing feature library.
As a further improvement of the application, the segmentation graph is matched with detail graphs in a detail graph feature library constructed in advance one by one, and the method comprises the following steps:
screening target segmentation graphs with the area size exceeding a preset area threshold value from all segmentation graphs;
extracting feature information from the target segmentation graph by using a semantic segmentation network, and determining whether feature information corresponding to preset mark information exists or not;
and if so, matching the target segmentation graph with the detail graphs in the detail graph feature library one by one.
As a further refinement of the present application, outputting a vehicle repair scenario from the final vehicle parts list includes: inputting the vehicle damage assessment picture into a pre-trained target detection network to obtain a damage position and a vehicle damage category;
confirming a maintenance mode corresponding to the vehicle damage category based on a preset maintenance rule;
generating a vehicle maintenance scheme according to the damage position, the maintenance mode and the vehicle component information in the final vehicle component list;
and outputting the vehicle maintenance scheme.
As a further improvement of the application, before outputting the vehicle maintenance scheme, the method further comprises the following steps:
generating a rectangular frame on the gray scale image according to the damage position;
acquiring a target pixel value of a pixel of a central point coordinate of the rectangular frame;
calculating first areas of all pixels with the same size as the target pixel value in the rectangular frame, and calculating second areas of all pixels with the same size as the target pixel value in the gray-scale image;
and when the ratio of the first area to the second area exceeds a preset area ratio threshold, upgrading the maintenance scheme according to a preset rule.
As a further improvement of the present application, training a target detection network includes:
inputting the damage sample image into a target detection network containing a first parameter, extracting damage features in the damage sample image through the target detection network and generating an intermediate convolution feature map;
inputting the intermediate convolution feature map into a mask prediction branch model containing a second parameter;
inputting all damage label types, all rectangular frame areas, all sample damage types and all sample damage rectangular areas of the damage sample image into a first loss model to obtain a first loss value, and simultaneously inputting all damage label types, all mask labeling graphs, all mask damage types and all mask tensor graphs of the damage sample image into a second loss model to obtain a second loss value;
determining a total loss value according to the first loss value and the second loss value;
and when the total loss value does not reach the preset convergence condition, iteratively updating a first parameter of the target detection network and a second parameter of the mask prediction branch model until the total loss value reaches the preset convergence condition, and recording the converged target detection network as the trained target detection network.
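The total-loss computation and convergence check described above can be sketched as follows. The equal weighting of the two loss values and the loss-change convergence test are assumptions for illustration; the patent only states that a total loss value is determined from the first and second loss values and compared against a preset convergence condition.

```python
def total_loss(first_loss: float, second_loss: float,
               w1: float = 1.0, w2: float = 1.0) -> float:
    """Weighted sum of the detection-branch (first) and mask-branch
    (second) loss values; equal weights are an assumption."""
    return w1 * first_loss + w2 * second_loss

def converged(loss_history: list, eps: float = 1e-4) -> bool:
    """One possible preset convergence condition: the change in the
    total loss between successive iterations falls below eps."""
    return len(loss_history) >= 2 and abs(loss_history[-1] - loss_history[-2]) < eps

# during training, the first and second parameters would be updated
# iteratively until converged(...) returns True
```

In a real training loop, each iteration would recompute both loss values after a parameter update and append the new total to the history.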
In order to solve the above technical problem, another technical solution adopted by the present application is: an intelligent vehicle damage assessment apparatus is provided, comprising: a segmentation module, configured to input a vehicle damage assessment picture to be identified into a pre-trained semantic segmentation network for segmentation to obtain a gray-scale image and the pixel value of each pixel in the gray-scale image;
the classification module is used for classifying according to the pixel value of each pixel in the gray-scale image to obtain an initial vehicle component list and a segmentation image corresponding to each vehicle component consisting of the same pixel value;
the matching module is used for matching the segmentation maps with detail maps in a pre-constructed detail map feature library one by one, and the detail map feature library stores the detail maps and corresponding component information;
the correction module is used for replacing the part information corresponding to the segmentation map in the initial vehicle part list with the part information of the matched detail map for the matched segmentation map according to the matching result, and not modifying the unmatched segmentation map to obtain a final vehicle part list;
and a generating module, configured to output a vehicle maintenance scheme according to the final vehicle component list.
In order to solve the above technical problem, the present application adopts another technical solution that: there is provided a computer device comprising a processor, a memory coupled to the processor, the memory having stored therein program instructions that, when executed by the processor, cause the processor to perform the steps of the vehicle intelligent damage assessment method of any one of the above.
In order to solve the above technical problem, the present application adopts another technical solution that: there is provided a storage medium storing program instructions for execution by a processor to implement the vehicle intelligent damage assessment method of any one of the above.
The beneficial effects of this application are as follows. The intelligent vehicle damage assessment method obtains a segmentation map of each vehicle component, together with an initial vehicle component list of component names, by segmenting and recognizing the vehicle damage assessment picture. It then matches the segmentation maps one by one against the detail maps in the pre-constructed detail map feature library; for each matched segmentation map, the component information corresponding to that segmentation map in the initial vehicle component list is replaced with the component information of the matched detail map, while unmatched segmentation maps are left unmodified, yielding a final vehicle component list; finally, a vehicle maintenance scheme is output according to the final list. Matching the segmentation map of a vehicle component against the detail map feature library confirms whether the recognized segmentation map is accurate, thereby improving the recognition accuracy of damaged vehicle components in the damage assessment picture and allowing an accurate vehicle maintenance scheme to be output.
Drawings
FIG. 1 is a schematic flow chart of a vehicle intelligent damage assessment method according to an embodiment of the present invention;
FIG. 2 is a flow diagram illustrating one embodiment of creating a detail view feature library;
FIG. 3 is a flow diagram illustrating another embodiment of the present invention for creating a detail view feature library;
FIG. 4 is a flowchart illustrating a specific step S3 in FIG. 1;
FIG. 5 is a flowchart illustrating a specific step S5 in FIG. 1;
FIG. 6 is another detailed flowchart of step S5 in FIG. 1;
FIG. 7 is a schematic flow chart of a target detection network training process of the intelligent damage assessment method for vehicles according to the embodiment of the invention;
FIG. 8 is a functional block diagram of an intelligent damage assessment device for a vehicle according to an embodiment of the present invention;
FIG. 9 is a schematic structural diagram of a computer device according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of a storage medium according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second" and "third" in this application are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying any indication of the number of technical features indicated. Thus, a feature defined as "first," "second," or "third" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless explicitly specifically limited otherwise. All directional indications (such as up, down, left, right, front, and rear) in the embodiments of the present application are only used to explain the relative positional relationship between the components, the movement, and the like in a specific posture (as shown in the drawings), and if the specific posture is changed, the directional indication is changed accordingly. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Fig. 1 is a schematic flow chart of an intelligent damage assessment method for a vehicle according to an embodiment of the invention. It should be noted that the method of the present application is not limited to the flow sequence shown in fig. 1 if the results are substantially the same. As shown in fig. 1, the method includes:
and step S1, inputting the vehicle damage assessment picture to be recognized into a pre-trained semantic segmentation network for segmentation to obtain a gray-scale image and a pixel value of each pixel in the gray-scale image.
It should be noted that the semantic segmentation network is DeepLabv3+. The gray-scale image obtained by segmenting the vehicle damage assessment picture is the recognition result of the semantic segmentation network. The pixel value of each pixel in the gray-scale image represents a category; for example, a pixel value of 1 indicates category 1. The categories represent different vehicle parts, such as doors, tires, rims, and windows.
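As a minimal sketch of how such a gray-scale class map could arise from the network output (assuming, as is common for DeepLabv3+-style models, one logit map per class), the per-pixel category is the argmax over the class dimension:

```python
import numpy as np

def logits_to_class_map(logits: np.ndarray) -> np.ndarray:
    """Collapse per-class segmentation logits of shape (C, H, W) into a
    single-channel class-index map (H, W); each pixel value then encodes
    one vehicle part category, as described above."""
    return np.argmax(logits, axis=0).astype(np.uint8)

# toy logits: 3 classes (0 = background, 1 = door, 2 = tire) on a 2x2 image
logits = np.array([
    [[0.9, 0.1], [0.2, 0.1]],   # class 0 scores
    [[0.05, 0.8], [0.1, 0.2]],  # class 1 scores
    [[0.05, 0.1], [0.7, 0.7]],  # class 2 scores
])
gray = logits_to_class_map(logits)
# gray == [[0, 1], [2, 2]]
```

The class indices and toy scores are illustrative; the real network's channel layout depends on its training labels.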
And step S2, classifying according to the pixel value of each pixel in the gray-scale map to obtain an initial vehicle component list and a segmentation map corresponding to each vehicle component composed of the same pixel value.
It should be noted that, in the original vehicle damage claim settlement scheme, the vehicle component segmentation module is prone to misidentifying component orientation on a detail picture; if, for example, a right rear fender is identified as a left front fender, an erroneous maintenance result follows.
Specifically, classification is performed according to the pixel value of each pixel in the gray-scale image, and a segmentation map is obtained for each vehicle component composed of pixels of the same value, yielding an initial vehicle component list. First, the categories appearing in the gray-scale image are counted to obtain a component list [1, 2, 5, ..., N], where the numbers represent different vehicle components. There are many ways to count, for example: set the initial vehicle component list to empty, traverse the gray-scale image across its width and height, and whenever a category not yet present in the list is encountered, add it to the initial vehicle component list.
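The traversal-based counting just described, together with the per-component segmentation maps, might look like the following sketch (the class values and the use of 0 as background are illustrative assumptions):

```python
import numpy as np

def build_initial_part_list(gray: np.ndarray) -> list:
    """Collect every part category appearing in the class map, in the
    order first seen while traversing width and height; treating 0 as
    background is an assumption."""
    parts = []
    for value in gray.flatten():
        v = int(value)
        if v != 0 and v not in parts:
            parts.append(v)
    return parts

def split_into_segmentation_maps(gray: np.ndarray) -> dict:
    """One binary mask per part category: the 'segmentation map'
    composed of pixels sharing the same value."""
    return {c: (gray == c) for c in build_initial_part_list(gray)}

gray = np.array([[0, 1, 1],
                 [5, 5, 2]])
# build_initial_part_list(gray) -> [1, 5, 2]
```

Using `np.unique` would give the same categories in sorted rather than first-seen order; either suffices to populate the initial list.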
And step S3, matching the segmentation maps with detail maps in a pre-constructed detail map feature library one by one, wherein the detail map feature library stores the detail maps and corresponding component information.
Specifically, the segmentation maps corresponding to each vehicle component composed of the same pixel values are matched with detail maps in a pre-constructed detail map feature library one by one, wherein the detail map feature library stores the detail maps and corresponding component information.
Further, as shown in fig. 2, a detail drawing feature library is established, which includes:
step S201, obtaining multi-angle shot pictures of various vehicle types.
Specifically, pictures of all vehicle models on the market, photographed from multiple angles, are obtained.
Step S202, cutting and generating detailed diagrams of each vehicle part of various vehicle types according to the position of the boundary line of the vehicle part, and acquiring vehicle part information corresponding to each detailed diagram.
Specifically, according to the position of the boundary line of the vehicle parts, detail maps of the vehicle parts of various vehicle types are randomly cut and generated, part information and part direction information are recorded, and the vehicle part information corresponding to each detail map is acquired.
And step S203, extracting the feature vector of the detail map by using the semantic segmentation network.
Specifically, the detail maps of the vehicle components are input into the semantic segmentation network, the last feature map before the classification layer is selected, and the feature vector of each detail map is extracted through a global pooling operation. The feature vector can have any dimensionality, for example a 256-dimensional vector [d1, d2, ..., d256].
And step S204, storing the feature vectors and the corresponding vehicle component information in pairs to obtain a detail map feature library.
Specifically, the feature vectors and the corresponding vehicle part information are stored in pairs of [ value: key ] to obtain a detail map feature library.
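Steps S203 and S204 can be sketched as follows; global average pooling over the final feature map and a plain in-memory list of [feature vector, part info] pairs are illustrative assumptions standing in for the network's pooling layer and the actual library storage:

```python
import numpy as np

def global_pool(feature_map: np.ndarray) -> np.ndarray:
    """Global average pooling: a (C, H, W) feature map becomes a
    C-dimensional feature vector."""
    return feature_map.mean(axis=(1, 2))

def build_detail_feature_library(detail_features: dict) -> list:
    """Store feature vectors paired with their vehicle part
    information, mirroring the paired storage described above."""
    return [(global_pool(fm), part_info)
            for part_info, fm in detail_features.items()]

# toy 256-channel feature map for one detail picture; the part name
# is a hypothetical example
library = build_detail_feature_library(
    {"left front fender": np.ones((256, 4, 4))})
vec, info = library[0]
# vec has 256 dimensions, matching the [d1, d2, ..., d256] example above
```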
Further, as shown in fig. 3, after step S204, the method further includes:
and S205, extracting the feature vector to be matched of each segmentation graph by using a semantic segmentation network.
Specifically, a semantic segmentation network is used for extracting a feature vector to be matched from each segmentation map.
And S206, matching each feature vector to be matched with the feature vector of each detail drawing in the detail drawing feature library respectively to confirm whether the feature vector matched with the feature vector to be matched exists in the detail drawing feature library.
Specifically, each feature vector to be matched is compared with the feature vector of each detail map in the detail map feature library to determine whether a matching feature vector exists in the library; if so, the component information paired with that feature vector is retrieved from the library.
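The per-vector matching in steps S205 and S206 could be implemented with a similarity measure such as the cosine similarity sketched below; the 0.9 threshold is an assumed hyperparameter, not a value given in the patent:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two feature vectors."""
    return float(np.dot(a, b) /
                 (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def match_against_library(query: np.ndarray, library: list,
                          threshold: float = 0.9):
    """Return the part info of the best-matching detail-map vector,
    or None when nothing in the library clears the threshold."""
    best_info, best_sim = None, threshold
    for vec, info in library:
        sim = cosine_similarity(query, vec)
        if sim > best_sim:
            best_info, best_sim = info, sim
    return best_info

# toy 2-D library with hypothetical part names
library = [(np.array([1.0, 0.0]), "right rear fender"),
           (np.array([0.0, 1.0]), "left front fender")]
result = match_against_library(np.array([0.99, 0.05]), library)
# result -> "right rear fender"
```

Returning None for sub-threshold queries corresponds to the "unmatched segmentation map" branch of step S4.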
Further, as shown in fig. 4, step S3 includes:
and S301, screening target segmentation maps with the area size exceeding a preset area threshold value from all the segmentation maps.
Specifically, an area threshold is preset, and a target segmentation map with an area size exceeding the preset area threshold is screened from all segmentation maps.
Step S302, extracting feature information from the target segmentation graph by using a semantic segmentation network, and confirming whether feature information corresponding to preset mark information exists.
Specifically, feature information is extracted from a target segmentation graph by using a semantic segmentation network, wherein the area size of the target segmentation graph exceeds a preset area threshold, and whether feature information corresponding to preset mark information exists is determined.
And step S303, if the target segmentation graph exists, matching the target segmentation graph with the detail graphs in the detail graph feature library one by one.
Specifically, feature information is extracted from the target segmentation graph by using a semantic segmentation network, whether feature information corresponding to preset mark information exists is judged, and if the feature information exists, the target segmentation graph is matched with detail graphs in a detail graph feature library one by one.
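Step S301 reduces to filtering the segmentation maps by pixel area before any matching is attempted. This sketch assumes the segmentation maps are binary masks; the marker-feature check of step S302 is network-dependent and is left out:

```python
import numpy as np

def screen_target_maps(masks: dict, area_threshold: int) -> dict:
    """Keep only the segmentation maps whose pixel area exceeds the
    preset area threshold (step S301)."""
    return {cls: m for cls, m in masks.items()
            if int(m.sum()) > area_threshold}

masks = {
    1: np.ones((10, 10), dtype=bool),    # area 100
    2: np.zeros((10, 10), dtype=bool),   # area 0
}
targets = screen_target_maps(masks, area_threshold=50)
# only class 1 survives the screening
```

The threshold value 50 is illustrative; in practice it would be tuned so that only detail-sized components reach the feature-library matching.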
And step S4, replacing the part information corresponding to the division map in the initial vehicle part list with the part information of the matched detail map for the matched division map according to the matching result, and not modifying the unmatched division map to obtain a final vehicle part list.
Specifically, when the segmentation map is matched with the detail map in the pre-constructed detail map feature library, replacing the part information of the matched detail map with the part information corresponding to the segmentation map in the initial vehicle part list, and when the segmentation map is not matched with the detail map in the pre-constructed detail map feature library, not modifying the segmentation map which is not matched to obtain the final vehicle part list.
And step S5, outputting a vehicle maintenance scheme according to the final vehicle component list.
Specifically, compared with the initial component list, the final vehicle component list is more accurate; therefore, once the final vehicle component list is obtained, the corresponding vehicle maintenance scheme is output according to it.
Further, as shown in fig. 5, step S5 includes:
step S501, inputting the vehicle damage assessment picture into a pre-trained target detection network to obtain a damage position and a vehicle damage category.
It should be noted that inputting the vehicle damage assessment picture into the pre-trained target detection network is implemented through mobile communication between a mobile terminal at the near end and a vehicle intelligent damage assessment system at the far end. The mobile terminal may be a mobile phone, a tablet computer, or another terminal with an image capture function (for example, one equipped with a camera); in the following, a mobile phone with a camera is taken as an example. The vehicle intelligent damage assessment system may be deployed on a server or a PC (personal computer), as configured by the user as needed. A vehicle intelligent damage assessment application runs on the mobile phone, and the user photographs the damaged part of the accident vehicle at the accident site. Optionally, the application provides a shooting guide interface: after the user enters it, the camera is started and a floating frame of a preset size is displayed on the interface. When shooting, the user fits the damaged part completely inside the floating frame, so that the captured picture meets the requirements. This yields a vehicle damage assessment picture of the damaged part of the vehicle and avoids recognition anomalies in the damage assessment system caused by an incompletely photographed damaged part.
Specifically, after a vehicle damage assessment picture of a damaged component is collected, a vehicle damage assessment request is generated based on the picture and sent to the pre-trained target detection network. It can be understood that the request includes basic information about the accident vehicle, such as the vehicle type, the license plate number, and the vehicle identification number (VIN code). That is, the request transmitted to the pre-trained target detection network carries information such as the vehicle type, the damage position of the damaged component, and the vehicle damage category.
And step S502, confirming a maintenance mode corresponding to the vehicle damage type based on a preset maintenance rule.
Specifically, after receiving the vehicle damage assessment request, the intelligent vehicle damage assessment system obtains the image information of the damaged component from the request and analyzes it based on image recognition technology, or the user analyzes the image information. More specifically, the system determines the damage position and the vehicle damage category of the damaged component, either through image recognition or from the user's selection operation on the image of the damaged component, and then confirms the maintenance mode corresponding to the vehicle damage category based on a preset maintenance rule.
And S503, generating a vehicle maintenance scheme according to the damaged position, the maintenance mode and the vehicle component information in the final vehicle component list.
Specifically, a vehicle maintenance scheme is generated from the obtained damage position, the maintenance mode, and the vehicle component information in the final vehicle component list.
And step S504, outputting a vehicle maintenance scheme.
Specifically, the obtained vehicle maintenance scheme is output from the vehicle intelligent damage assessment system.
Further, as shown in fig. 6, before step S504, the method further includes:
and step S505, generating a rectangular frame on the gray-scale map according to the damage position.
Specifically, a rectangular frame is generated on the grayscale map according to the damage position, and the coordinates of its center point (x + w/2, y + h/2) are computed from the damage position coordinates [x, y, w, h], where x and y are the coordinates of the upper-left corner of the rectangular frame, and w and h are its width and height, respectively.
Step S506, a target pixel value of the pixel of the center point coordinate of the rectangular frame is obtained.
Specifically, on the generated grayscale map, the target pixel value of the pixel at the center point coordinates of the rectangular frame is obtained; this value identifies the component category corresponding to the damage position. Different target pixel values represent different component categories.
Step S507, calculating a first area of all pixels in the rectangular frame having the same value as the target pixel value, and calculating a second area of all pixels in the grayscale map having the same value as the target pixel value.
Specifically, a rectangular frame and its center point coordinates are generated on the grayscale map according to the damage position; the first area is calculated as the number of pixels in the rectangular frame whose value equals the target pixel value, and the second area is calculated as the number of pixels in the whole grayscale map whose value equals the target pixel value.
And step S508, when the ratio of the first area to the second area exceeds a preset area ratio threshold, upgrading the maintenance scheme according to a preset rule.
Specifically, when the ratio of the first area to the second area exceeds the preset area ratio threshold, the maintenance scheme is upgraded according to a preset rule. For example, if the default maintenance scheme for the damage category "dent" is a minor repair, and the ratio of the first area to the second area exceeds the preset area ratio threshold, the scheme is upgraded to a major repair. The upgrade rules are made according to real-world business logic; the core idea is to decide whether to upgrade based on the proportion of the component's area occupied by the damage.
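Steps S505 to S508 can be sketched in NumPy as follows. The function name and the toy grayscale map are illustrative, and the 0.5 ratio threshold is an assumed placeholder for the preset area ratio threshold:

```python
import numpy as np

def should_upgrade(gray, box, ratio_threshold=0.5):
    """Sketch of steps S505-S508: decide whether to upgrade the repair scheme.

    gray -- 2-D array of component labels (the grayscale map), one pixel value per component
    box  -- damage position as [x, y, w, h]; (x, y) is the top-left corner of the rectangle
    """
    x, y, w, h = box
    # Step S505/S506: center of the damage rectangle, (x + w/2, y + h/2),
    # and the target pixel value there, which identifies the damaged component
    cx, cy = x + w // 2, y + h // 2
    target = gray[cy, cx]
    # Step S507: first area = component pixels inside the rectangle;
    # second area = all pixels of that component in the whole grayscale map
    roi = gray[y:y + h, x:x + w]
    first_area = int(np.count_nonzero(roi == target))
    second_area = int(np.count_nonzero(gray == target))
    # Step S508: upgrade when the damage covers enough of the component
    return first_area / second_area > ratio_threshold

# Toy grayscale map: component "7" occupies a 4x4 patch
gray = np.zeros((8, 8), dtype=np.uint8)
gray[2:6, 2:6] = 7
print(should_upgrade(gray, [2, 2, 4, 4]))   # damage rectangle covers the whole component
```

A damage rectangle covering the full component gives a ratio of 1.0 and triggers the upgrade; a rectangle covering only a quarter of it does not.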
Further, as shown in fig. 7, the training target detection network includes:
step S601, inputting the damage sample image into a target detection network containing a first parameter, extracting damage features in the damage sample image through the target detection network, and generating an intermediate convolution feature map.
Specifically, the target detection network is a deep convolutional neural network based on the YOLOv3 model, used to identify the sample damage category and the sample damage rectangular region in a damage sample image; that is, the target detection network has the same structure as the YOLOv3 model. The damage features are features of seven damage categories, including scratch, dent, crease, dead fold, tear, and missing parts. The first parameter of the target detection network may be set as needed: for example, it may be initialized with the parameters of a YOLOv3 model through transfer learning, or set to preset values.
Step S602, inputting the intermediate convolution characteristic diagram into a mask prediction branch model containing a second parameter.
Specifically, the mask prediction branch model is a preset convolutional neural network model, and a second parameter of the mask prediction branch model may be set according to a requirement, for example, the second parameter is a random parameter value.
Step S603, inputting all damage label types, all rectangular frame areas, all sample damage types, and all sample damage rectangular areas of the damage sample image into the first loss model to obtain a first loss value, and inputting all damage label types, all mask annotation maps, all mask damage types, and all mask tensor maps of the damage sample image into the second loss model to obtain a second loss value.
Specifically, the first loss model comprises a first loss function; all damage label categories, all rectangular frame regions, all sample damage categories, and all sample damage rectangular regions are input into the first loss function, and the first loss value is calculated by a cross-entropy method. Likewise, the second loss model comprises a second loss function; all damage label categories, all mask annotation maps, all mask damage categories, and all mask tensor maps of the damage sample image are input into the second loss function, and the second loss value is calculated by a cross-entropy method.
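The per-sample cross-entropy at the heart of both loss functions can be illustrated minimally as follows. The probability vector and class index are hypothetical, and the patent does not give the exact loss formulas:

```python
import numpy as np

def cross_entropy(probs, label_index):
    """Minimal per-sample cross-entropy: -log p(correct class).
    A sketch of the core of the first/second loss functions; the full
    losses also cover the rectangular regions and mask maps."""
    return float(-np.log(probs[label_index]))

# Hypothetical predicted probabilities over the seven damage categories
probs = np.array([0.05, 0.70, 0.05, 0.05, 0.05, 0.05, 0.05])
loss = cross_entropy(probs, 1)   # suppose the labeled category is index 1
print(round(loss, 4))
```

The loss is small when the network assigns high probability to the labeled category and grows without bound as that probability approaches zero.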
And step S604, determining a total loss value according to the first loss value and the second loss value.
Specifically, the first loss value and the second loss value are input into a loss model containing a total loss function. The total loss function may be set as needed; the loss model is the model that generates the total loss value, which is calculated by evaluating the total loss function.
And step S605, when the total loss value does not reach the preset convergence condition, iteratively updating a first parameter of the target detection network and a second parameter of the mask prediction branch model until the total loss value reaches the preset convergence condition, and recording the converged target detection network as the trained target detection network.
Specifically, the convergence condition may be that the total loss value becomes small and no longer decreases after 9000 iterations; that is, when the total loss value stops decreasing after 9000 iterations, training is stopped and the converged target detection network is recorded as the trained target detection network. The convergence condition may also be that the total loss value falls below a set threshold; that is, when the total loss value is smaller than the set threshold, training is stopped and the converged network is recorded as trained. Accordingly, while the total loss value has not reached the preset convergence condition, the first parameter of the target detection network and the second parameter of the mask prediction branch model are iteratively updated, so that the results keep improving and the recognition accuracy keeps increasing. When the total loss value reaches the preset convergence condition, the result is optimal, the target detection network has converged, and it is recorded as the trained target detection network.
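The convergence control flow of steps S604 and S605 can be sketched as follows. The helper name, the summation of the two losses, and the threshold value are assumptions, since the patent leaves the total loss function and the exact convergence test open:

```python
def train_until_convergence(step_fn, threshold=1e-3, max_steps=9000):
    """Sketch of the outer training loop (steps S604-S605, hypothetical helper).

    step_fn -- performs one training step (updating the first and second
               parameters internally) and returns (first_loss, second_loss).
    The simpler of the two convergence conditions is used here: stop when
    the total loss drops below a threshold, capped at 9000 iterations.
    """
    for step in range(max_steps):
        first_loss, second_loss = step_fn(step)
        # Assumed total loss function: plain sum of the two loss values
        total_loss = first_loss + second_loss
        if total_loss < threshold:
            return step, total_loss   # converged: record the network as trained
    return max_steps, total_loss      # hit the iteration cap without converging

# Toy step function with exponentially decaying losses
step_num, final = train_until_convergence(lambda s: (0.5 * 0.99 ** s, 0.5 * 0.99 ** s))
print(step_num, final)
```

With the toy decay schedule the loop stops well before the 9000-iteration cap, once the summed loss first dips under the threshold.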
In the intelligent vehicle loss assessment method of the embodiment of the invention, the vehicle damage assessment picture is segmented and recognized to obtain the segmentation maps of all vehicle components and an initial vehicle component list of their names. The segmentation maps are matched one by one against the detail maps in the pre-constructed detail map feature library: for each matched segmentation map, the component information corresponding to it in the initial vehicle component list is replaced by the component information of the matched detail map, while unmatched segmentation maps are left unmodified, yielding the final vehicle component list. Finally, a vehicle maintenance scheme is output according to the final vehicle component list. Matching the segmentation map of a vehicle component against the detail maps in the detail map feature library confirms whether the recognized segmentation map is accurate, which improves the recognition accuracy of damaged vehicle components in the vehicle damage assessment picture and allows an accurate vehicle maintenance scheme to be output.
Fig. 8 is a functional module schematic diagram of the intelligent damage assessment device for the vehicle according to the embodiment of the present application. As shown in fig. 8, the vehicle intelligent damage assessment device 2 includes a segmentation module 21, a classification module 22, a matching module 23, a correction module 24, and a generation module 25.
The segmentation module 21 is configured to input the vehicle damage assessment picture to be identified into a pre-trained semantic segmentation network for segmentation, so as to obtain a grayscale image and a pixel value of each pixel in the grayscale image;
the classification module 22 is configured to classify the vehicle components according to the pixel values of each pixel in the grayscale map to obtain an initial vehicle component list and a segmentation map corresponding to each vehicle component composed of the same pixel values;
the matching module 23 is used for matching the segmentation maps with detail maps in a pre-constructed detail map feature library one by one, wherein the detail map feature library stores the detail maps and corresponding component information;
the correction module 24 is configured to, according to the matching result, replace the component information corresponding to each matched segmentation map in the initial vehicle component list with the component information of the matched detail map, leaving unmatched segmentation maps unmodified, to obtain a final vehicle component list;
and a generating module 25 for outputting the vehicle maintenance plan according to the final vehicle component list.
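The pixel-value grouping performed by the segmentation and classification modules can be sketched in NumPy as follows. The label table is hypothetical, standing in for the real label set of the semantic segmentation network:

```python
import numpy as np

# Hypothetical mapping from pixel value to component name; the real table
# comes from the semantic segmentation network's label set.
LABELS = {0: "background", 1: "front bumper", 2: "left headlight"}

def split_components(gray):
    """Sketch of the classification step: produce one binary segmentation
    map per pixel value in the grayscale map, plus the initial component list."""
    seg_maps, component_list = {}, []
    for value in np.unique(gray):
        value = int(value)
        if value == 0:                       # skip background pixels
            continue
        seg_maps[value] = (gray == value)    # mask of this component's pixels
        component_list.append(LABELS.get(value, f"component {value}"))
    return seg_maps, component_list

# Toy 3x3 grayscale map containing two components
gray = np.array([[0, 1, 1],
                 [0, 1, 2],
                 [0, 2, 2]], dtype=np.uint8)
maps, parts = split_components(gray)
print(parts)   # ['front bumper', 'left headlight']
```

Each entry in `maps` is the segmentation map of one component, and `parts` is the initial vehicle component list that the matching and correction modules subsequently refine.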
Optionally, establishing a detail drawing feature library, including:
acquiring multi-angle shot pictures of various vehicle types;
cutting and generating detailed diagrams of each vehicle part of various vehicle types according to the position of the boundary line of the vehicle part, and acquiring vehicle part information corresponding to each detailed diagram;
extracting feature vectors of the detail map by using a semantic segmentation network;
and storing the feature vectors and the corresponding vehicle component information in pairs to obtain a detail map feature library.
Optionally, after storing the feature vectors and the corresponding vehicle component information in pairs to obtain the detail map feature library, the method includes:
extracting a feature vector to be matched of each segmentation graph by using a semantic segmentation network;
and matching each feature vector to be matched with the feature vector of each detail drawing in the detail drawing feature library respectively to determine whether the feature vector matched with the feature vector to be matched exists in the detail drawing feature library.
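The feature-vector matching described above can be sketched as follows. Cosine similarity and the 0.9 threshold are assumptions, as the patent only states that each feature vector to be matched is compared with the feature vectors in the detail map feature library:

```python
import numpy as np

def match_in_library(query, library, threshold=0.9):
    """Sketch of matching a segmentation map's feature vector against the
    detail map feature library. Cosine similarity is an assumed metric."""
    best_name, best_score = None, -1.0
    q = query / np.linalg.norm(query)
    for name, vec in library.items():
        score = float(q @ (vec / np.linalg.norm(vec)))
        if score > best_score:
            best_name, best_score = name, score
    # Report no match when even the closest detail map is below the threshold
    return best_name if best_score >= threshold else None

# Toy library: one unit feature vector per detail map (hypothetical)
library = {"front door": np.array([1.0, 0.0, 0.0]),
           "rear bumper": np.array([0.0, 1.0, 0.0])}
print(match_in_library(np.array([0.98, 0.05, 0.0]), library))   # close to "front door"
```

Returning `None` for a below-threshold best match is what lets the correction module leave unmatched segmentation maps unmodified.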
Optionally, the matching module 23 performs an operation of matching the segmentation maps with the detail maps in the pre-constructed detail map feature library one by one, including:
screening target segmentation graphs with the area size exceeding a preset area threshold value from all segmentation graphs;
extracting feature information from the target segmentation graph by using a semantic segmentation network, and determining whether feature information corresponding to preset mark information exists or not;
and if so, matching the target segmentation graph with the detail graphs in the detail graph feature library one by one.
Optionally, the generating module 25 performs an operation of outputting a vehicle repair scenario according to the final vehicle component list, including: inputting the vehicle damage assessment picture into a pre-trained target detection network to obtain a damage position and a vehicle damage category;
confirming a maintenance mode corresponding to the vehicle damage category based on a preset maintenance rule;
generating a vehicle maintenance scheme according to the damage position, the maintenance mode and the vehicle component information in the final vehicle component list;
and outputting the vehicle maintenance scheme.
Optionally, before the generating module 25 performs the operation of outputting the vehicle maintenance plan, the method further includes:
generating a rectangular frame on the gray-scale image according to the damage position;
acquiring a target pixel value of a pixel of a central point coordinate of the rectangular frame;
calculating a first area of all pixels in the rectangular frame with the same value as the target pixel value, and calculating a second area of all pixels in the grayscale map with the same value as the target pixel value;
and when the ratio of the first area to the second area exceeds a preset area ratio threshold, upgrading the maintenance scheme according to a preset rule.
Optionally, training the target detection network comprises:
inputting the damage sample image into a target detection network containing a first parameter, extracting damage features in the damage sample image through the target detection network and generating an intermediate convolution feature map;
inputting the intermediate convolution feature map into a mask prediction branch model containing a second parameter;
inputting all damage label types, all rectangular frame areas, all sample damage types and all sample damage rectangular areas of the damage sample image into a first loss model to obtain a first loss value, and simultaneously inputting all damage label types, all mask labeling graphs, all mask damage types and all mask tensor graphs of the damage sample image into a second loss model to obtain a second loss value;
determining a total loss value according to the first loss value and the second loss value;
and when the total loss value does not reach the preset convergence condition, iteratively updating a first parameter of the target detection network and a second parameter of the mask prediction branch model until the total loss value reaches the preset convergence condition, and recording the converged target detection network as the trained target detection network.
For other details of the technical solutions implemented by the modules in the vehicle intelligent damage assessment apparatus in the foregoing embodiments, reference may be made to the description of the vehicle intelligent damage assessment method in the foregoing embodiments, and details are not described here again.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on its differences from the other embodiments, and the same and similar parts among the embodiments may be referred to one another. Since the system embodiment is basically similar to the method embodiment, its description is brief; for the relevant points, refer to the corresponding description of the method embodiment.
Referring to fig. 9, fig. 9 is a schematic structural diagram of a computer device according to an embodiment of the present application. As shown in fig. 9, the computer device 30 includes a processor 31 and a memory 32 coupled to the processor 31.
The memory 32 stores program instructions that, when executed by the processor 31, cause the processor 31 to perform the steps of the vehicle intelligent damage assessment method in the above-described embodiment.
The processor 31 may also be referred to as a CPU (Central Processing Unit). The processor 31 may be an integrated circuit chip having signal processing capabilities. The processor 31 may also be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
Referring to fig. 10, fig. 10 is a schematic structural diagram of a storage medium according to an embodiment of the present application. The storage medium of the embodiment of the present application stores program instructions 41, and the program instructions 41, when executed by the processor 31, implement the vehicle intelligent damage assessment method in the above embodiment. The program instructions 41 may be stored in the storage medium in the form of a software product and include several instructions that enable a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk, or a computer device such as a computer, server, mobile phone, or tablet. The server may be an independent server, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a content delivery network (CDN), and big data and artificial intelligence platforms.
In the several embodiments provided in the present application, it should be understood that the disclosed terminal, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit. The above embodiments are merely examples and are not intended to limit the scope of the present disclosure, and all modifications, equivalents, and flow charts using the contents of the specification and drawings of the present disclosure or those directly or indirectly applied to other related technical fields are intended to be included in the scope of the present disclosure.

Claims (10)

1. An intelligent loss assessment method for a vehicle, comprising:
inputting a vehicle damage assessment picture to be recognized into a pre-trained semantic segmentation network for segmentation to obtain a gray-scale image and a pixel value of each pixel in the gray-scale image;
classifying according to the pixel value of each pixel in the gray-scale image to obtain an initial vehicle component list and a segmentation image corresponding to each vehicle component consisting of the same pixel value;
matching the segmentation maps with detail maps in a pre-constructed detail map feature library one by one, wherein the detail map feature library stores detail maps and corresponding component information;
according to the matching result, replacing the part information corresponding to the segmentation map in the initial vehicle part list with the part information of the matched detail map for the matched segmentation map, and not modifying the unmatched segmentation map to obtain a final vehicle part list;
outputting a vehicle repair plan based on the final vehicle component list.
2. The vehicle intelligent damage assessment method of claim 1, wherein establishing said minutiae map signatures library comprises:
acquiring multi-angle shot pictures of various vehicle types;
cutting and generating detailed diagrams of each vehicle part of various vehicle types according to the position of the boundary line of the vehicle part, and acquiring vehicle part information corresponding to each detailed diagram;
extracting feature vectors of the detail map by using the semantic segmentation network;
and storing the feature vectors and the corresponding vehicle component information in pairs to obtain a detail map feature library.
3. The vehicle intelligent damage assessment method according to claim 2, wherein said storing said feature vectors in pairs with corresponding vehicle component information to obtain a minutiae map feature library, comprises:
extracting a feature vector to be matched of each segmentation graph by using the semantic segmentation network;
and matching each feature vector to be matched with the feature vector of each detail drawing in the detail drawing feature library respectively to determine whether the feature vector matched with the feature vector to be matched exists in the detail drawing feature library.
4. The vehicle intelligent damage assessment method according to claim 1, wherein said individually matching said segmentation maps with detail maps in a pre-constructed detail map feature library comprises:
screening target segmentation graphs with the area size exceeding a preset area threshold value from all the segmentation graphs;
extracting feature information from the target segmentation graph by using the semantic segmentation network, and determining whether feature information corresponding to preset mark information exists or not;
and if so, matching the target segmentation graph with the detail graphs in the detail graph feature library one by one.
5. The vehicle intelligent damage assessment method of claim 1, wherein said outputting a vehicle repair plan according to said final vehicle component list comprises: inputting the vehicle damage assessment picture into a pre-trained target detection network to obtain a damage position and a vehicle damage category;
confirming a maintenance mode corresponding to the vehicle damage category based on a preset maintenance rule;
generating a vehicle maintenance scheme according to the damage position, the maintenance mode and the vehicle component information in the final vehicle component list;
and outputting the vehicle maintenance scheme.
6. The vehicle intelligent damage assessment method of claim 5, wherein before outputting said vehicle repair schedule, further comprising:
generating a rectangular frame on the gray-scale image according to the damage position;
acquiring a target pixel value of a pixel of the central point coordinate of the rectangular frame;
calculating a first area of all pixels in the rectangular frame with the same value as the target pixel value, and calculating a second area of all pixels in the grayscale map with the same value as the target pixel value;
and when the ratio of the first area to the second area exceeds a preset area ratio threshold, upgrading the maintenance scheme according to a preset rule.
7. The vehicle intelligent impairment assessment method of claim 5, wherein training the target detection network comprises:
inputting the damage sample image into a target detection network containing a first parameter, extracting damage features in the damage sample image through the target detection network and generating an intermediate convolution feature map;
inputting the intermediate convolution feature map into a mask prediction branch model containing a second parameter;
inputting all damage label types, all rectangular frame areas, all sample damage types and all sample damage rectangular areas of the damage sample image into a first loss model to obtain a first loss value, and simultaneously inputting all damage label types, all mask labeling graphs, all mask damage types and all mask tensor graphs of the damage sample image into a second loss model to obtain a second loss value;
determining a total loss value according to the first loss value and the second loss value;
when the total loss value does not reach a preset convergence condition, iteratively updating a first parameter of a target detection network and a second parameter of a mask prediction branch model, and recording the target detection network after convergence as a trained target detection network until the total loss value reaches the preset convergence condition.
8. An intelligent damage assessment device for a vehicle, characterized in that it comprises:
the segmentation module is used for inputting a vehicle damage assessment picture to be identified into a pre-trained semantic segmentation network for segmentation to obtain a gray-scale image and a pixel value of each pixel in the gray-scale image;
the classification module is used for classifying according to the pixel value of each pixel in the gray-scale image to obtain an initial vehicle component list and a segmentation image corresponding to each vehicle component consisting of the same pixel value;
the matching module is used for matching the segmentation maps with detail maps in a pre-constructed detail map feature library one by one, and the detail map feature library stores detail maps and corresponding component information;
the correction module is used for replacing the part information corresponding to the segmentation map in the initial vehicle part list with the part information of the matched detail map for the matched segmentation map according to the matching result, and not modifying the unmatched segmentation map to obtain a final vehicle part list;
and the generating module is used for outputting a vehicle maintenance scheme according to the final vehicle component list.
9. A computer device, characterized in that the computer device comprises a processor, a memory coupled to the processor, in which memory program instructions are stored, which program instructions, when executed by the processor, cause the processor to carry out the steps of the vehicle intelligence damage assessment method according to any of claims 1-7.
10. A storage medium characterized by storing program instructions capable of implementing the vehicle intelligent damage assessment method according to any one of claims 1-7.
CN202210606939.7A 2022-05-31 2022-05-31 Intelligent loss assessment method, device and equipment for vehicle and storage medium Pending CN114842198A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210606939.7A CN114842198A (en) 2022-05-31 2022-05-31 Intelligent loss assessment method, device and equipment for vehicle and storage medium


Publications (1)

Publication Number Publication Date
CN114842198A true CN114842198A (en) 2022-08-02

Family

ID=82572362

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210606939.7A Pending CN114842198A (en) 2022-05-31 2022-05-31 Intelligent loss assessment method, device and equipment for vehicle and storage medium

Country Status (1)

Country Link
CN (1) CN114842198A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116303830A (en) * 2023-03-15 2023-06-23 深圳开思时代科技有限公司 Matching method and system for parts to be repaired of automobile
CN116303830B (en) * 2023-03-15 2023-10-17 深圳开思时代科技有限公司 Matching method and system for parts to be repaired of automobile


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination