WO2021143063A1 - Vehicle damage assessment method, device, computer equipment and storage medium - Google Patents

Vehicle damage assessment method, device, computer equipment and storage medium

Info

Publication number
WO2021143063A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
damage
damaged
vehicle
repair
Prior art date
Application number
PCT/CN2020/099268
Other languages
English (en)
French (fr)
Inventor
叶苑琼
赵亮
刘金萍
彭杉
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 filed Critical 平安科技(深圳)有限公司
Publication of WO2021143063A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00 Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/08 Insurance

Definitions

  • This application relates to the field of predictive models, and in particular to a method, device, computer equipment, and storage medium for determining vehicle damage.
  • The inventor realizes that, due to lack of experience, car owners who take photos themselves often capture photos that cannot be used for damage assessment. By the time the owner retakes the photo, the best moment for photographing has passed, which seriously affects the efficiency of damage assessment and the user experience of the damage assessment service. In addition, car owners cannot judge the vehicle damage or the required repair information while taking photos.
  • the present application provides a method, a device, a computer device and a storage medium for determining the damage of a vehicle, which can predict the damage of a damaged vehicle more accurately and with higher efficiency.
  • the present application provides a method for determining the damage of a vehicle, and the method includes:
  • the present application also provides a vehicle damage assessment device, which includes:
  • An image acquisition module configured to acquire a car damage image uploaded by a terminal, and preprocess the car damage image to obtain an image to be predicted, the car damage image including the damaged part of the damaged vehicle photographed by the terminal;
  • a car damage prediction module configured to determine car damage information corresponding to the damaged vehicle according to the to-be-predicted image based on a car damage prediction model, where the car damage information includes damaged parts and repair categories;
  • the damage assessment generating module is used to obtain the repair information corresponding to the damaged part and the repair category, determine the damage assessment result of the damaged vehicle according to the repair information, and send the damage assessment result to the terminal.
  • this application also provides a computer device, the computer device including a memory and a processor;
  • the memory is used to store a computer program
  • the processor is configured to execute the computer program and implement the following steps when the computer program is executed:
  • the present application also provides a computer-readable storage medium, the computer-readable storage medium stores a computer program, the computer program includes program instructions, and when the program instructions are executed by a processor, they are used to implement the following steps:
  • This application discloses a vehicle damage assessment method, device, computer equipment and storage medium, which can ensure the quality of the vehicle damage image and improve the accuracy of the damage assessment result by preprocessing when obtaining the car damage image uploaded by the terminal;
  • the trained car damage prediction model performs car damage prediction on the image to be predicted, which can accurately obtain the car damage information corresponding to the damaged vehicle.
  • the prediction efficiency is higher and the damage assessment time is saved.
  • according to the damaged part and the repair category in the car damage information, the corresponding repair information can be obtained, and the damage assessment result is then determined, which not only solves the user's damage assessment problem, but also improves the user experience.
  • FIG. 1 is a schematic flowchart of a method for determining vehicle damage provided by an embodiment of the present application
  • Fig. 2 is a schematic flow chart of the sub-steps of obtaining a car damage image and preprocessing in Fig. 1;
  • FIG. 3 is a schematic diagram of a scene for judging the distance of shooting damaged parts according to an embodiment of the present application
  • FIG. 4 is a schematic diagram of another scene for judging the distance of shooting damaged parts according to an embodiment of the present application.
  • FIG. 5 is a schematic flowchart of a training method for a car damage prediction model provided by an embodiment of the present application
  • Fig. 6 is a schematic flow chart of the sub-steps of determining car damage information in Fig. 1;
  • FIG. 7 is a schematic diagram of a scenario for predicting car damage information provided by an embodiment of the present application.
  • FIG. 8 is a schematic flowchart of steps after the loss determination result in FIG. 1 is sent to the terminal;
  • FIG. 9 is a schematic block diagram of a vehicle damage assessment device provided by an embodiment of the present application.
  • FIG. 10 is a schematic block diagram of sub-modules of the vehicle damage assessment device in FIG. 9;
  • FIG. 11 is a schematic block diagram of the structure of a computer device according to an embodiment of the application.
  • the vehicle damage assessment method includes steps S10 to S30.
  • Step S10 Obtain a car damage image uploaded by the terminal, and preprocess the car damage image to obtain an image to be predicted, and the car damage image includes the damaged part of the damaged vehicle photographed by the terminal.
  • the terminal may be an electronic device such as a smart phone, a tablet computer, a notebook computer, a personal digital assistant, and a wearable device.
  • the user photographs the damaged part of the damaged vehicle through the terminal, and uploads the photographed car damage image to the server through the terminal for processing.
  • the server preprocesses the car damage image to obtain the image to be predicted.
  • the vehicle damage image includes the damaged part of the damaged vehicle photographed by the terminal.
  • the preprocessing may include image normalization processing, brightness equalization processing, and contrast enhancement processing.
  • obtaining the car damage image uploaded by the terminal in step S10 includes the following steps S11 to S14.
  • Step S11 Obtain an image shot by the terminal and shooting parameters of the image from the terminal.
  • the server when it obtains the car damage image uploaded by the terminal, it may obtain the image displayed on the shooting interface of the terminal and the shooting parameters of the image.
  • the shooting parameter may include a shooting distance, and the shooting distance refers to a distance from a camera of the terminal to a damaged part of the damaged vehicle.
  • the server may also determine shooting parameters such as the recognition degree and resolution of the image according to the image obtained from the terminal.
  • the degree of recognition refers to the degree of recognition of the damaged part.
  • Step S12 judging whether the image meets the loss assessment condition according to the shooting parameter and the image.
  • the server determines whether the image meets the loss assessment condition according to the shooting parameters and the image, and if the image meets the loss assessment condition, determines the image as the car damage image.
  • the loss determination condition may include that parameters such as shooting distance, recognition degree, and resolution are in a preset range. If the shooting distance, recognition degree, and resolution are all within a preset range, it is determined that the image meets the loss determination condition.
  • the quality of the car damage image can be ensured and the accuracy of the damage assessment result can be improved.
  • the server may determine whether the image meets the loss determination condition according to the difference between the shooting distance and a preset distance. Exemplarily, if the absolute value of the difference between the shooting distance and the preset distance is less than 30 cm, it is determined that the shooting distance satisfies the loss determination condition.
  • the preset distance may be 150 cm.
  • If the shooting distance satisfies the loss assessment condition, the server outputs "the distance is appropriate" on the shooting interface through the terminal. If the absolute value of the difference between the shooting distance and the preset distance is not less than 30 cm, it is determined that the image does not meet the loss assessment condition, and the server outputs "the distance is too far" on the shooting interface through the terminal.
  • the captured image can reflect the damaged part as much as possible, and reduce unnecessary external areas, so that the accuracy of the subsequent damage determination results is higher.
  • the server may obtain the recognition degree of the image, and determine whether the recognition degree is greater than a preset recognition degree.
  • the recognition degree of the damaged part of the image is greater than the preset recognition degree, it is determined that the image meets the loss assessment condition.
  • the preset recognition degree may be 90%.
  • the server may capture images in the shooting interface of the terminal in real time through the terminal, and input the captured video frames into the trained car damage prediction model.
  • the car damage prediction model extracts features from the video frame, classifies the video frame according to the obtained feature map, and outputs the confidence of the corresponding category of the video frame. The confidence can be used to indicate the recognition degree of the damaged part in the video frame.
  • the car damage prediction model outputs the confidence levels of multiple categories of the video frame, and takes the category corresponding to the maximum confidence level as the category of the video frame, and the maximum confidence level is the confidence level of the video frame.
  • the damaged parts are classified and identified through the car damage prediction model, and the photographed parts are adjusted according to the recognition results, which can ensure that the parts that need to be assessed are captured, improve the accuracy of subsequent damage assessments, and avoid unrecognizable or uncategorized parts.
  • the server may determine whether the resolution is greater than a preset resolution according to the resolution of the image. Exemplarily, if the resolution of the image is greater than the preset resolution, it is determined that the image meets the loss determination condition.
  • the preset resolution may be 100 PPI (Pixels Per Inch, pixels per inch).
  • If the resolution of the image is greater than the preset resolution, for example 120 PPI, it is determined that the image meets the loss assessment condition; if the resolution of the image is not greater than the preset resolution, for example 80 PPI, the image is rejected and the user is prompted to retake the image and upload it through the terminal.
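  • As an illustration of the checks above, the following is a minimal Python sketch. The thresholds (preset distance of 150 cm with a 30 cm tolerance, 90% recognition degree, 100 PPI resolution) follow the examples in this description; the function name, field names and prompt wording for the recognition and resolution cases are assumptions.

```python
# Minimal sketch of the loss-assessment condition check described above.
# Thresholds follow the examples in the text; names and extra prompts are hypothetical.

PRESET_DISTANCE_CM = 150
DISTANCE_TOLERANCE_CM = 30
PRESET_RECOGNITION = 0.90
PRESET_RESOLUTION_PPI = 100

def meets_loss_assessment_condition(shooting_distance_cm: float,
                                    recognition: float,
                                    resolution_ppi: float) -> tuple[bool, str]:
    """Return (ok, prompt) for an image and its shooting parameters."""
    if abs(shooting_distance_cm - PRESET_DISTANCE_CM) >= DISTANCE_TOLERANCE_CM:
        return False, "the distance is too far, the part cannot be detected"
    if recognition <= PRESET_RECOGNITION:
        return False, "the damaged part cannot be recognized, please adjust the shot"
    if resolution_ppi <= PRESET_RESOLUTION_PPI:
        return False, "the resolution is too low, please retake the image"
    return True, "the distance is appropriate"

# Example: 140 cm away, 93% recognition, 120 PPI -> accepted as a car damage image
print(meets_loss_assessment_condition(140, 0.93, 120))
```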
  • Step S13 If the image meets the loss assessment condition, determine the image as the car damage image.
  • the image is determined as the car damage image.
  • the server may prompt the user to upload the car damage image on the shooting interface of the terminal.
  • When the server detects that the shooting distance satisfies the loss assessment condition, "distance is appropriate" is displayed in the shooting interface, together with the damaged part and its corresponding recognition degree, such as "rear fender (left), recognition degree 93%".
  • the obtained car damage image includes the damaged parts of the damaged vehicle with a clearer and suitable size, which can improve the accuracy of the subsequent damage assessment results.
  • Step S14 If the image does not meet the loss assessment condition, determine a shooting reminder based on the image, and send the shooting reminder to the terminal.
  • If the image does not meet the loss assessment condition, for example, the absolute value of the difference between the shooting distance and the preset distance is not less than 30 cm, the server determines a shooting reminder according to the shooting distance, for example, "the distance is too far, the part cannot be detected".
  • the server sends the shooting prompt to the terminal, and the terminal displays the shooting prompt on a shooting interface.
  • the user corresponding to the terminal may adjust the shooting distance according to the shooting prompt until the captured image meets the loss assessment condition.
  • the quality of the car damage image can be ensured and the accuracy of the damage assessment result can be improved.
  • the server preprocesses the car damage image uploaded by the terminal to obtain the image to be predicted corresponding to the car damage image.
  • the pre-processing of the car damage image in step S10 to obtain the image to be predicted includes step S15 to step S17.
  • Step S15 Perform normalization processing on the car damage image to obtain a normalized image.
  • the server performs normalization processing on the car damage image to convert the car damage image into an image in a standard form.
  • the normalization process includes 4 steps, namely coordinate centering, x-shearing normalization, scaling normalization, and rotation normalization.
  • the normalization can be processed using functions such as premnmx, postmnmx, tramnmx, and mapminmax.
  • the premnmx function is used to convert the 0-255 unsigned-integer pixel data of the car damage image to values between 0 and 1.
  • Normalizing the car damage image can find out the invariants in the car damage image, for example, it can reduce the interference of the car damage image due to uneven light.
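  • A minimal sketch of the scaling step is given below, mapping 0-255 pixel values to the 0-1 range as described above; premnmx and mapminmax are MATLAB functions, so the NumPy stand-in here is an assumption purely for illustration.

```python
import numpy as np

def normalize_to_unit_range(image: np.ndarray) -> np.ndarray:
    """Map 0-255 unsigned-integer pixel values to floats in [0, 1] by min-max scaling."""
    image = image.astype(np.float32)
    lo, hi = image.min(), image.max()
    if hi == lo:                      # avoid division by zero for flat images
        return np.zeros_like(image)
    return (image - lo) / (hi - lo)

# Example: a synthetic 2x2 grayscale patch
patch = np.array([[0, 64], [128, 255]], dtype=np.uint8)
print(normalize_to_unit_range(patch))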
  • Step S16 Perform brightness equalization processing on the normalized image to obtain a brightness equalized image.
  • the server performs brightness equalization processing on the standard image.
  • hue H represents the type of color
  • saturation S represents how close the color is to the spectral color
  • lightness V represents how bright the color is.
  • the hexagonal boundary represents hue H
  • the horizontal axis represents saturation S
  • the lightness V is measured along the vertical axis.
  • the server first converts the standard image to the HSV color space, obtaining the hue of the standard image on the hexagonal boundary, the saturation on the horizontal axis, and the lightness on the vertical axis. Then the server adjusts the brightness V component in the HSV color space, so that the overall brightness of the image is balanced.
  • the V component is adjusted in the vertical axis direction so that the V component reaches a preset brightness value, and the preset brightness value is used to indicate that the brightness of the image reaches an optimal value, such as 0.618.
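  • A minimal sketch of the V-component adjustment in HSV space is given below, assuming OpenCV is available; the target value 0.618 follows the example above, and scaling the V channel so that its mean reaches that value is one plausible reading of "balancing" the brightness, not the only one.

```python
import cv2
import numpy as np

def equalize_brightness(bgr_image: np.ndarray, target_v: float = 0.618) -> np.ndarray:
    """Shift the mean of the HSV V component toward target_v (V expressed in [0, 1])."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV).astype(np.float32)
    v = hsv[:, :, 2] / 255.0                        # V channel, scaled to [0, 1]
    mean_v = v.mean()
    if mean_v > 0:
        v = np.clip(v * (target_v / mean_v), 0, 1)  # scale overall brightness toward 0.618
    hsv[:, :, 2] = v * 255.0
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
```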
  • Step S17 Perform contrast enhancement processing on the brightness-balanced image to obtain an image to be predicted.
  • the server performs contrast enhancement processing on the standard image after the brightness equalization processing to obtain the image to be predicted.
  • the histogram equalization algorithm redistributes image pixel values through nonlinear stretching of the image, so that the number of pixels within a certain gray scale range is approximately the same.
  • the mapping method of the histogram equalization is: Sk = Σ_{j=0}^{k} nj / n, k = 0, 1, …, L−1
  • where Sk represents the cumulative gray-level probability distribution of the image at the k-th gray level, L represents the total number of gray levels in the image, nk represents the number of pixels of the k-th gray level, and n represents the total number of pixels in the image.
  • the gray probability density of the image to be predicted is uniformly distributed; at the same time, increasing the dynamic range of the gray level of the image to be predicted can improve the contrast of the image to be predicted.
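  • A minimal sketch of the mapping Sk above is given below; the direct NumPy computation reflects the formula, and OpenCV's built-in cv2.equalizeHist performs the same operation for single-channel images.

```python
import numpy as np

def equalize_histogram(gray: np.ndarray) -> np.ndarray:
    """Histogram equalization via the cumulative mapping Sk = sum_{j<=k} nj / n."""
    hist = np.bincount(gray.ravel(), minlength=256)   # nk: pixel count at each gray level
    cdf = hist.cumsum() / gray.size                   # Sk: cumulative probability
    lut = np.round(cdf * 255).astype(np.uint8)        # stretch back to the 0-255 range
    return lut[gray]

# For comparison, OpenCV provides the same operation as cv2.equalizeHist(gray).
```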
  • the overall brightness of the car damage image can be balanced and various parts and damaged parts can be more prominent.
  • the image to be predicted obtained after preprocessing can improve the accuracy of feature extraction in the car damage prediction model, and the obtained car damage prediction result is more accurate.
  • Step S20 Based on the car damage prediction model, determine the car damage information corresponding to the damaged vehicle according to the to-be-predicted image, where the car damage information includes damaged parts and repair categories.
  • the server inputs the to-be-predicted image into a car damage prediction model for car damage prediction, and the car damage prediction model outputs car damage information corresponding to the damaged vehicle.
  • the vehicle damage information includes damaged parts and repair categories.
  • the car damage information can be: the right rear door is scratched and paint needs to be touched up.
  • the vehicle damage prediction model may include an SSD network.
  • By extracting feature maps of different scales in the SSD network for detection, large-scale feature maps can be used to detect small objects and small-scale feature maps can be used to detect large objects, which adapts to targets of different sizes.
  • the server calculates the vehicle damage information corresponding to the damaged vehicle according to the GPU cluster.
  • a GPU (Graphics Processing Unit, graphics processing unit) cluster is a computer cluster, in which each node is equipped with a graphics processing unit. Because the GPU for general-purpose computing has a very high data parallel architecture, it can process a large number of data points in parallel, so that the GPU cluster can perform very fast calculations and improve computational throughput.
  • FIG. 5 is a schematic flowchart of a training method of a car damage prediction model provided by an embodiment of the present application.
  • the training method includes step S101 to step S105.
  • Step S101 Determine an initial vehicle damage prediction model.
  • the initial car damage prediction model is used to predict the car damage information corresponding to any car damage sample image, and obtain the predicted loss value corresponding to the damaged part and the repair type in the car damage information.
  • the initial vehicle damage prediction model can be any of the following networks: Single Shot Multibox Detector (SSD) network, Convolutional Neural Network (CNN), Restricted Boltzmann Machine (RBM) or Recurrent Neural Network (RNN).
  • the initial vehicle loss prediction model is an SSD network.
  • the SSD network uses the VGG16 network structure as the basic model, and convolves the car damage sample images to obtain feature maps of different scales through the convolutional layers Conv4_3, Conv7, Conv8_2, Conv9_2, Conv10_2, and Conv11_2.
  • the feature maps are used to predict the car damage information corresponding to the car damage sample image, such as the damaged part and repair category.
  • Step S102 Obtain the car damage sample image and the label information of the car damage sample image, and perform preprocessing on the car damage sample image to obtain a training sample image.
  • the car damage sample image includes the damaged part of the damaged vehicle.
  • the labeling information includes damaged labeling parts and repaired labeling categories.
  • the server configures a preset number of car damage sample images corresponding to each damaged part of the vehicle, and labels the damaged part and the repair category in each car damage sample image to obtain a car damage sample image including the label information.
  • the labeling information includes damaged labeling parts and repaired labeling categories.
  • the damaged labeled parts and repair labeled categories may include: a damaged door handle that needs replacement, a scratched door that needs repainting, a flat tire that needs replacement, a scratched left front door that needs repainting, a scratched right front door that needs repainting, a scratched left fender that needs repainting, a scratched right fender that needs repainting, a damaged front bumper that needs repair, a damaged rear bumper that needs repair, etc.
  • the car damage sample image is preprocessed to obtain a training sample image for training the car damage prediction model.
  • the car damage sample image is preprocessed, such as normalization processing, brightness equalization processing, and contrast enhancement processing, so that the training sample images have the same size, and the overall brightness is balanced and each part of the image And the damaged parts are more prominent, which can effectively improve the accuracy of feature extraction of the training sample images in the car damage prediction model, and improve the accuracy of training.
  • the training sample images are divided into a training set of a first proportion and a verification set of a second proportion, where the first proportion may be 70% and the second proportion may be 30%.
  • the training set is used to train the initial car damage prediction model
  • the verification set is used to verify the initial car damage prediction model trained by the training set.
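  • A minimal sketch of the 70% / 30% split described above is given below; representing the labeled samples as a list of file paths is an assumption purely for illustration.

```python
import random

def split_samples(sample_paths: list[str], train_ratio: float = 0.7, seed: int = 0):
    """Shuffle the labeled car damage sample images and split them 70% / 30%."""
    paths = list(sample_paths)
    random.Random(seed).shuffle(paths)
    cut = int(len(paths) * train_ratio)
    return paths[:cut], paths[cut:]   # (training set, verification set)

train_set, val_set = split_samples([f"sample_{i}.jpg" for i in range(100)])
print(len(train_set), len(val_set))   # 70 30
```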
  • Step S103 Input the training sample image into the initial car damage prediction model to obtain car damage information corresponding to the training sample image, where the car damage information includes damaged parts and repair categories.
  • the above-mentioned training set is input into the initial car damage prediction model for convolution processing to obtain a feature map corresponding to the training set, and a priori box matching is performed on the feature map to obtain the prediction box corresponding to the feature map; the prediction box includes vehicle damage information, such as the predicted damaged part and the predicted repair category.
  • each training sample image in the training set is input into the initial car damage prediction model, and each training sample image is convolved by each convolutional layer; each convolutional layer is convolved with two different 3×3 convolution kernels. The feature map output by one convolution kernel is used to calculate the confidence loss, and the feature map output by the other convolution kernel is used to calculate the positioning loss.
  • the a priori box and the real box are matched on the feature map to determine the training sample corresponding to the a priori box.
  • the real frame is the frame corresponding to the damaged labeled part and the repaired labeled category in the training sample image.
  • a priori boxes of different scales and aspect ratios are used, and the a priori boxes are used to determine training samples, and the training samples include positive samples and negative samples.
  • the prediction box corresponding to the a priori box is used to predict the regression of the damaged part and the classification of the repair category corresponding to the damaged part.
  • the prediction box is divided into two parts: the first part is the confidence of each repair category, and the second part is the position of the prediction box, which contains 4 values (cx, cy, w, h) that respectively represent the center coordinates, width, and height of the prediction box.
  • the position of the prediction frame is the area of the damaged part.
  • the prediction frame is the actual selection of the a priori frame, and the prediction frame is based on the a priori frame, which can reduce the training difficulty to a certain extent.
  • the training samples are determined according to the Intersection Over Union (IOU) between the a priori box and the real box. If the IOU corresponding to the a priori box is greater than the IOU threshold, it is determined that the a priori box matches the real box, and the prediction box corresponding to the a priori box is marked as a positive sample; if the IOU corresponding to the a priori box is not greater than the IOU threshold, Then the a priori box does not match the real box, and the prediction box corresponding to the a priori box is marked as a negative sample.
  • the IOU represents the degree of overlap between the prior frame and the real frame
  • the Jaccard coefficient is used to calculate the IOU: IOU = (A ∩ B) / (A ∪ B), where A represents the area of the a priori frame and B represents the area of the real frame.
  • the IOU threshold may be 0.5.
  • For example, if the IOU value between the a priori box A and the real box is greater than the IOU threshold of 0.5, the prediction box corresponding to the a priori box A is marked as a positive sample; if the IOU value between the a priori box C and the real box is 0.7, the prediction box corresponding to the a priori box C is also marked as a positive sample.
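  • A minimal sketch of the Jaccard/IOU computation and the positive/negative matching rule with the 0.5 threshold is given below; representing boxes as (xmin, ymin, xmax, ymax) tuples is an assumption purely for illustration.

```python
def iou(box_a, box_b):
    """Jaccard coefficient of two boxes given as (xmin, ymin, xmax, ymax)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def label_prior_boxes(prior_boxes, real_box, iou_threshold=0.5):
    """Mark each prior box's prediction as a positive or negative training sample."""
    return ["positive" if iou(p, real_box) > iou_threshold else "negative"
            for p in prior_boxes]

priors = [(15, 15, 65, 65), (100, 100, 150, 150)]
print(label_prior_boxes(priors, real_box=(20, 20, 70, 70)))  # ['positive', 'negative']
```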
  • Step S104 Calculate a predicted loss value according to the damaged part and the damaged marked part, as well as the repair category and the repair marked category.
  • the real frame obtained in the above steps includes the damaged label part and the repair label category, and the a priori frame includes the damaged part and the repair category.
  • the loss function is used to calculate the positioning loss and the confidence loss of the training sample corresponding to the training sample image.
  • the location loss refers to the position difference between the damaged part and the damaged labeled part
  • the confidence loss refers to the normalized loss value of the repair category corresponding to the damaged part.
  • the loss function is the weighted sum of the localization loss (loc) and the confidence loss (conf), and the loss function L is defined as follows:
  • L(x, c, l, g) = (1/N) · [ L_conf(x, c) + α · L_loc(x, l, g) ]
  • where N is the number of positive samples, c is the predicted value of confidence, l is the predicted position of the prediction frame, g is the position parameter of the real frame, and the weight coefficient α is set to 1 through cross-validation.
  • the localization loss L_loc represents the position difference between the predicted frame and the real frame, and uses the Smooth L1 loss function.
  • the localization loss L_loc is defined as follows: L_loc(x, l, g) = Σ_{i∈Pos} Σ_{m∈{cx,cy,w,h}} x_ij^k · smooth_L1(l_i^m − ĝ_j^m)
  • the Smooth L1 loss function is: smooth_L1(x) = 0.5·x² if |x| < 1, and |x| − 0.5 otherwise
  • {cx, cy, w, h} respectively represent the center coordinates, width and height of the prediction frame or the real frame; l_i^m is the position prediction value of the i-th prediction box, and ĝ_j^m is the encoded position of the j-th real frame; k represents the category of the real frame, that is, the repair category corresponding to the predicted damaged part. Since x_ij^k = 0 for unmatched boxes, the localization loss is only calculated for positive samples.
  • the confidence loss L_conf is the softmax loss calculated over the confidences of all repair categories, and its input is the predicted confidence value of each repair category.
  • the confidence loss L_conf is defined as: L_conf(x, c) = −Σ_{i∈Pos} x_ij^p · log(ĉ_i^p) − Σ_{i∈Neg} log(ĉ_i^0), where ĉ_i^p = exp(c_i^p) / Σ_p exp(c_i^p)
  • i represents the i-th prediction frame and j represents the j-th real frame.
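  • A minimal NumPy sketch of the two loss terms and their weighted combination with α = 1 is given below; the toy offsets, scores and labels are assumptions purely for illustration, and label 0 stands for the background class of unmatched (negative) boxes.

```python
import numpy as np

def smooth_l1(x: np.ndarray) -> np.ndarray:
    """smooth_L1(x) = 0.5 x^2 if |x| < 1, else |x| - 0.5."""
    return np.where(np.abs(x) < 1, 0.5 * x ** 2, np.abs(x) - 0.5)

def softmax(scores: np.ndarray) -> np.ndarray:
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def ssd_loss(loc_pred, loc_target, conf_scores, labels, alpha=1.0):
    """Weighted sum of localization loss (positives only) and confidence loss.

    loc_pred / loc_target: (N_pos, 4) offsets for matched (positive) boxes.
    conf_scores: (N_boxes, C) raw class scores; labels: (N_boxes,), 0 = background.
    """
    n_pos = max(len(loc_pred), 1)
    l_loc = smooth_l1(loc_pred - loc_target).sum()              # positives only
    probs = softmax(conf_scores)
    l_conf = -np.log(probs[np.arange(len(labels)), labels] + 1e-12).sum()
    return (l_conf + alpha * l_loc) / n_pos

loc_pred = np.array([[0.1, -0.2, 0.05, 0.0]])
loc_target = np.zeros((1, 4))
conf = np.array([[2.0, 0.5, 0.1], [1.5, 0.2, 0.3]])             # 2 boxes, 3 categories
print(ssd_loss(loc_pred, loc_target, conf, labels=np.array([1, 0])))
```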
  • Step S105 Adjust parameters in the initial car loss prediction model according to the predicted loss value to obtain a trained car loss prediction model.
  • the trained initial vehicle damage prediction model is verified according to the verification set.
  • the training sample images in the verification set are input into the initial car damage prediction model, and the corresponding prediction frame is output; wherein the prediction frame includes the predicted damaged part and the predicted repair category. Then, a predicted loss value is calculated according to the damaged part and the damaged marked part, and the repair category and the repair marked category.
  • If the predicted loss value is less than or equal to the preset loss value, the training ends. If the predicted loss value is greater than the preset loss value, increase the number of car damage sample images corresponding to each damaged part and re-execute the above steps S102 to S105 until the predicted loss value of the trained initial car damage prediction model is less than or equal to the preset loss value; the training then ends, and a trained car damage prediction model is obtained.
  • the training method provided in the foregoing embodiment can effectively improve the accuracy of feature extraction of the car damage sample image in the car damage prediction model by preprocessing the car damage sample image, and improve the accuracy of training; by dividing the training sample images into a training set and a verification set, the parameters of the initial car damage prediction model can be continuously adjusted according to the predicted loss value, which can improve the prediction accuracy and robustness of the trained car damage prediction model.
  • step S20 based on the car damage prediction model, determining the car damage information corresponding to the damaged vehicle according to the to-be-predicted image includes steps S21 to S25.
  • Step S21 Input the to-be-predicted image into the trained car damage prediction model for convolution processing to obtain a feature map corresponding to the to-be-predicted image.
  • the server inputs the to-be-predicted image into the trained car damage prediction model, and the to-be-predicted image is convolved through the convolutional layers Conv4_3, Conv7, Conv8_2, Conv9_2, Conv10_2, and Conv11_2; each convolutional layer is convolved with two different 3×3 convolution kernels. The feature map output by one convolution kernel is used to calculate the confidence of the category, and the feature map output by the other convolution kernel is used to calculate the positioning regression.
  • Step S22 Perform vehicle damage prediction on the feature map according to a plurality of preset prediction frames, to obtain the category and confidence of each prediction frame, and the category includes a damaged part and a repair category.
  • multiple preset prediction frames are used for detection.
  • the preset prediction frame is the actual selection of the a priori frame, and the prediction frame is used to predict the category of the feature map.
  • the a priori box is used to determine training samples for training the vehicle damage prediction model.
  • the category of the prediction frame is first determined according to the category corresponding to the confidence level.
  • the category corresponding to the maximum confidence is the category of the prediction frame.
  • the categories include damaged parts and repair categories, such as a damaged door handle that needs replacement, a scratched door that needs repainting, a scratched left front door that needs repainting, a scratched right front door that needs repainting, a scratched left fender that needs repainting, a damaged front bumper that needs repair, etc.
  • For example, the category corresponding to the prediction frame A is category 1; if category 1 is a scratched left front door that needs to be touched up, then the category of the prediction frame A is a scratched left front door that needs to be touched up.
  • Step S23 Determine a preset number of to-be-selected prediction boxes from the prediction boxes whose confidence is greater than the confidence threshold.
  • the server screens the prediction frames of the determined category according to the confidence threshold to obtain the screened prediction frames.
  • the confidence threshold may be 0.8. Filter out prediction boxes with confidence levels lower than 0.8, and keep prediction boxes with confidence levels greater than or equal to 0.8.
  • the server determines a preset number of prediction boxes to be selected from the screened prediction boxes.
  • the filtered prediction frames are arranged in descending order according to the confidence, the top k prediction frames are retained according to the preset number, and the remaining prediction frames are deleted.
  • For example, if the preset number is 3, the server sorts the prediction boxes in descending order of confidence, retains the three prediction boxes with the highest confidence, and removes the remaining prediction boxes, obtaining 3 prediction boxes to be selected.
  • Step S24 Calculate the degree of overlap between different prediction frames to be selected, and filter out the candidate prediction frames whose overlap degree is greater than the overlap degree threshold to obtain the target prediction frame.
  • the server calculates the degree of overlap between different prediction frames to be selected.
  • the degree of overlap refers to the ratio of the intersection area of two candidate prediction boxes to their combined area, and can be represented by the Intersection over Union (IOU), calculated as IOU = (A ∩ B) / (A ∪ B), where A and B respectively represent the areas of the two different prediction boxes to be selected.
  • the server filters out candidate prediction frames whose overlap degree is greater than the overlap degree threshold to obtain the target prediction frame.
  • the overlap threshold may be 0.5.
  • the server filters out candidate prediction boxes whose overlap degree is greater than the overlap degree threshold according to the NMS (Non-Maximum Suppression) algorithm.
  • the candidate prediction boxes whose overlap degree is greater than the overlap degree threshold are eliminated, and a candidate prediction box with the greatest confidence is retained, that is, the target prediction box.
  • the two candidate prediction boxes of the same category can be merged into one through the NMS algorithm. For example, if the categories corresponding to the two prediction boxes to be selected are both "left front door scratched, needs touch-up paint", then the candidate prediction boxes of the same category are merged into one, and the target prediction frame is obtained as "left front door scratched, needs touch-up paint".
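  • A minimal sketch of this candidate-box filtering is given below: keep the highest-confidence box and discard boxes of the same category whose overlap with it exceeds the 0.5 threshold; the box representation and category strings are assumptions purely for illustration.

```python
def iou(a, b):
    """Jaccard overlap of two (xmin, ymin, xmax, ymax) boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def non_maximum_suppression(candidates, overlap_threshold=0.5):
    """candidates: list of (box, confidence, category); returns the retained target boxes."""
    remaining = sorted(candidates, key=lambda c: c[1], reverse=True)
    kept = []
    while remaining:
        best = remaining.pop(0)                       # highest-confidence candidate
        kept.append(best)
        remaining = [c for c in remaining
                     if not (c[2] == best[2] and iou(c[0], best[0]) > overlap_threshold)]
    return kept

boxes = [((20, 20, 70, 70), 0.95, "left front door scratched, needs touch-up paint"),
         ((22, 18, 72, 68), 0.90, "left front door scratched, needs touch-up paint")]
print(len(non_maximum_suppression(boxes)))            # the two overlapping candidates merge into 1
```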
  • Step S25 Determine vehicle damage information corresponding to the damaged vehicle according to the category of the target prediction frame.
  • the server can determine the vehicle damage information corresponding to the damaged vehicle according to the category corresponding to the obtained target prediction frame.
  • the vehicle damage information may be represented by the category of the target prediction frame.
  • For example, the server may determine that the car damage information corresponding to the damaged vehicle is "left front door is scratched and needs to be touched up", where the damaged part is "left front door scratch" and the repair category is "repainting needed".
  • the server inputs the image to be predicted into the trained car damage prediction model, and the output of the car damage prediction model includes two prediction boxes to be selected, which are marked as box1 and box2, respectively.
  • the category of box1 is "Scratch on the right rear door, needs touch-up paint", and the category of box2 is "Scratch on the right front door, needs touch-up paint".
  • Car damage prediction of the image to be predicted by the car damage prediction model can accurately obtain the car damage information corresponding to the damaged vehicle; the car damage information includes the damaged part and the repair category, which can provide richer information for damage determination.
  • the GPU cluster is used for calculation in the car damage prediction model, which can quickly process a large amount of data, shorten the detection time of the car damage prediction model, and quickly obtain the car damage information corresponding to the damaged vehicle.
  • Step S30 Obtain the maintenance information corresponding to the damaged part and the repair category, determine the damage assessment result of the damaged vehicle according to the maintenance information, and send the damage assessment result to the terminal.
  • the server needs to obtain the vehicle model corresponding to the damaged vehicle before querying the repair information corresponding to the damaged part and the repair category.
  • the server may prompt the user to enter the identification information by controlling the terminal to pop up a prompt box or issue a voice prompt, obtain the corresponding insurance policy number according to the identification information, and thereby obtain the model of the damaged vehicle from the insurance policy according to the insurance policy number.
  • the identification information may include a license plate number and a VIN (Vehicle Identification Number) identification code.
  • the VIN identification code is composed of 17 letters and numbers; it is a set of characters designated by the manufacturer to identify the vehicle and uniquely identifies the vehicle.
  • the vehicle model may include a small car, a medium car, or a large car.
  • the vehicle type corresponding to the damaged vehicle may be a large vehicle.
  • By reminding the user to input identification information, the model of the damaged vehicle can be obtained, and the maintenance information table corresponding to the model can be obtained from the database according to the model of the damaged vehicle.
  • the server may prompt the user to input the vehicle model of the damaged vehicle through the terminal.
  • the maintenance information table may include a maintenance price table.
  • the server obtains a maintenance information table corresponding to the vehicle type from a database according to the vehicle type corresponding to the damaged vehicle.
  • the server obtains the maintenance price list corresponding to the vehicle model from the database, as shown in Table 1:
  • Table 1 shows the repair price list corresponding to different models
  • Model | Repair price list
  • Mini car / small car / compact car | a
  • Medium-sized car / medium-large car / large car | b
  • SUV models / MPV models | c
  • Pickup / microvan / light bus | d
  • SUV refers to Sport Utility Vehicle; MPV refers to Multi-Purpose Vehicle.
  • the server may obtain the repair price list b corresponding to the vehicle type from the database.
  • the server queries the maintenance information table for repairing the damaged part according to the damaged part and the repair category.
  • the maintenance price list b corresponding to the large-sized vehicle in the database is shown in Table 2.
  • Table 2 is the repair price list corresponding to large vehicles b
  • the category refers to the repair category, which can include paint touch-up, replacement and repair;
  • the location refers to the damaged location, which can include vehicle parts such as doors, rear trunks, shock absorbers, brake discs, brake pads, and engines.
  • For example, if the repair category of the damaged part is "repainting required", the server queries the repair price of the damaged part from the repair price list b, for example, 100 yuan/time. If the damaged part is "damaged shock absorber" and the repair category is "replacement required", the server queries the repair price of the damaged part from the repair price list b, for example, 380 yuan/piece.
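  • A minimal sketch of looking up repair information from a maintenance price table such as Table 2 is given below; the two prices come from the examples above, while the dictionary layout and the part name paired with the 100 yuan/time entry are assumptions purely for illustration.

```python
# Hypothetical in-memory version of repair price list "b" (large vehicles),
# keyed by (damaged part, repair category); prices follow the examples above.
REPAIR_PRICE_LIST_B = {
    ("scratched door", "repainting required"): (100, "yuan/time"),          # part name assumed
    ("damaged shock absorber", "replacement required"): (380, "yuan/piece"),
}

def query_repair_info(damaged_part: str, repair_category: str) -> dict:
    price = REPAIR_PRICE_LIST_B.get((damaged_part, repair_category))
    if price is None:
        raise KeyError(f"no repair entry for {damaged_part!r} / {repair_category!r}")
    return {"damaged_part": damaged_part, "repair_category": repair_category,
            "price": price[0], "unit": price[1]}

print(query_repair_info("damaged shock absorber", "replacement required"))
```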
  • the server generates a damage assessment result of the damaged vehicle according to the repair information of the damaged part, and the damage assessment result includes a repair value.
  • the server calculates the repair value corresponding to the damaged part according to the repair price of the damaged part.
  • the damage assessment result corresponding to the damaged vehicle is generated according to the vehicle damage information and the repair value corresponding to the damaged part, and the damage assessment result is sent to the terminal corresponding to the damaged vehicle.
  • the damage determination result may include: the right rear door is damaged and the right front door is damaged, paint needs to be touched up, four shock absorbers are replaced, and the repair value is 1,720 yuan.
  • the server may also obtain the premium data of the damaged vehicle according to the insurance policy number corresponding to the damaged vehicle, so as to obtain the annual premium increase of the damaged vehicle.
  • the annual premium increase refers to the increase in the premium of the next year after the damaged vehicle has been assessed for loss by car insurance. It should be noted that if the user chooses auto insurance to determine the loss, the fixed loss amount is equal to the repair value.
  • the server compares the fixed loss amount with the annual premium increase. Exemplarily, if the fixed loss amount is greater than the annual premium increase, a recommendation of "loss assessment recommended" is output, advising the user to conduct a vehicle loss assessment; if the fixed loss amount is less than the annual premium increase, a recommendation of "loss assessment not recommended" is output, suggesting that the user does not conduct a vehicle loss assessment.
  • the recommendation may be added to the damage assessment result and sent to the terminal, and the user obtains the damage assessment result and the recommendation through the terminal.
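  • A minimal sketch of the comparison between the fixed loss amount (equal to the repair value when the user chooses insurance) and the annual premium increase is given below; the example figures are illustrative.

```python
def loss_assessment_recommendation(repair_value: float, annual_premium_increase: float) -> str:
    """Compare the fixed loss amount with the annual premium increase."""
    if repair_value > annual_premium_increase:
        return "loss assessment recommended"        # claiming costs less than the premium rise
    return "loss assessment not recommended"        # repairing out of pocket is cheaper

print(loss_assessment_recommendation(repair_value=1720, annual_premium_increase=900))
```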
  • After the damage assessment result is sent to the terminal in step S30, the method further includes step S40 to step S70.
  • Step S40 If the loss determination confirmation information is obtained from the terminal, the location of the terminal is acquired, and the loss determination confirmation information is sent by the terminal according to the user's confirmation operation of the loss determination result.
  • the terminal displays the loss assessment result on a display screen.
  • the user can click or select the "recommend loss assessment" option on the terminal, and the terminal sends a loss assessment confirmation message to the server according to the user's loss assessment confirmation operation.
  • the server obtains the loss assessment confirmation information sent by the terminal in response to the loss assessment confirmation operation performed by the user on the loss assessment result, the location information of the terminal is acquired.
  • the terminal may determine the location information of the terminal through GPS, the BeiDou satellite navigation system, the GLONASS positioning system, or the Galileo satellite navigation system.
  • Step S50 Determine a number of maintenance points located within a preset range of the damaged vehicle according to the position of the terminal, and obtain maintenance point information of each of the maintenance points.
  • the server searches for maintenance points within a preset range of the damaged vehicle according to the location information of the terminal.
  • the preset range may be 10Km.
  • the server obtains maintenance point information of each of the maintenance points.
  • the maintenance point information may include the distance, maintenance price, and service score of each of the maintenance points.
  • the server locates maintenance points within the preset range, and obtains the names and distances of several maintenance points.
  • the distance refers to the distance from the terminal to the maintenance point.
  • the server divides the distances of the several maintenance points into levels. Exemplarily, if the distance is less than 3 km, it belongs to the short distance; if the distance is 3 km to 8 km, it belongs to the middle distance; if the distance is 8 km to 10 km, it belongs to the long distance.
  • the server inputs data such as the names and distances of several maintenance points into a big data model to obtain maintenance information and service scores corresponding to the several maintenance points.
  • the big data model can process the data of maintenance points through operations such as dimensionality reduction, regression, clustering, classification and association. Through a series of operation processing, the big data model can output related data of several repair points, such as repair price data and service score data of the repair points.
  • the server may obtain repair prices and service scores of several repair points through a big data model.
  • the repair price may include three levels of high, medium, and low; and the service score may include three levels of high, medium, and low.
  • the maintenance point information of the maintenance point A may be close range, the maintenance price is a medium level, and the service evaluation is a high level.
  • Step S60 Based on the preset maintenance point ranking table, the recommended score of each maintenance point is determined according to the maintenance point information of each maintenance point.
  • the server calculates the recommended score of each maintenance point according to the maintenance point information of each maintenance point, and generates a maintenance recommendation list corresponding to each maintenance point.
  • the maintenance point ranking table includes three types of distance, maintenance price, and service score, as well as scores of the corresponding levels of each type, as shown in Table 3:
  • Table 3 is the sorting list of maintenance points
  • the server calculates the recommended score of each maintenance point according to a weighting algorithm, and generates a recommended score table according to each of the maintenance points and the recommended score corresponding to each of the maintenance points.
  • the server generates a recommended score table according to each maintenance point and the recommended score corresponding to each maintenance point, as shown in Table 4.
  • Table 4 is the recommended score table
  • Step S70 Push maintenance point information of at least one maintenance point to the terminal according to the recommended score.
  • the server generates a maintenance recommendation list corresponding to the damaged vehicle according to the recommended score, and pushes the maintenance point information corresponding to the maintenance point in the maintenance recommendation list to the terminal.
  • the server deletes maintenance points whose recommended scores are lower than a preset threshold from the recommended score table to obtain a maintenance recommendation list corresponding to the damaged vehicle.
  • the preset threshold may be 80 points.
  • For example, if the recommended scores of maintenance points C and D are lower than 80 points, maintenance points C and D are deleted from the recommended score table, and the maintenance recommendation list corresponding to the recommended score table is obtained, as shown in Table 5:
  • Table 5 is a list of recommended maintenance
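  • A minimal sketch of the weighted scoring and the 80-point filter described in steps S60 and S70 is given below; since the level scores of Table 3 and the weights are not reproduced in this text, the numbers and level names are assumptions purely for illustration.

```python
# Hypothetical level scores and weights; Table 3's actual values are not reproduced here.
LEVEL_SCORES = {"short": 100, "middle": 80, "long": 60,     # distance levels
                "low": 100, "medium": 80, "high": 60}        # repair price levels (lower is better)
SERVICE_SCORES = {"high": 100, "medium": 80, "low": 60}
WEIGHTS = {"distance": 0.4, "price": 0.3, "service": 0.3}

def recommended_score(distance_level: str, price_level: str, service_level: str) -> float:
    """Weighted recommendation score for one maintenance point."""
    return (WEIGHTS["distance"] * LEVEL_SCORES[distance_level]
            + WEIGHTS["price"] * LEVEL_SCORES[price_level]
            + WEIGHTS["service"] * SERVICE_SCORES[service_level])

points = {"A": ("short", "medium", "high"), "B": ("middle", "low", "medium"),
          "C": ("long", "high", "low"), "D": ("long", "medium", "low")}
scores = {name: recommended_score(*levels) for name, levels in points.items()}
recommendation_list = {n: s for n, s in scores.items() if s >= 80}   # drop points below 80
print(scores, recommendation_list)   # C and D fall below 80 and are removed
```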
  • the server may push the maintenance point information of at least one maintenance point in the maintenance recommendation list to the terminal.
  • the server may push maintenance point information such as the distance, maintenance price, and service score of the maintenance point A to the terminal.
  • the server may also push the repair point information such as the distance between the repair point A and the repair point B, the repair price, and the service score to the terminal together.
  • By obtaining the distance, maintenance price and service score of each maintenance point, and sorting them according to the preset maintenance point ranking table, the maintenance recommendation list is obtained; maintenance points are then recommended to the user according to the maintenance recommendation list, allowing the user to select a suitable maintenance point, with a high recommendation accuracy rate, which helps improve the user experience.
  • the vehicle damage assessment method provided in the above embodiment can ensure the quality of the car damage image and improve the accuracy of the damage assessment result by judging whether the car damage image meets the loss assessment condition when acquiring the car damage image uploaded by the terminal; preprocessing can further improve the car damage prediction result; predicting the image to be predicted through the trained car damage prediction model can accurately obtain the car damage information corresponding to the damaged vehicle, with higher prediction efficiency and shorter damage assessment time; according to the maintenance information table, the repair information corresponding to the damaged part and repair category in the car damage information can be obtained, and the damage assessment result can then be determined; by obtaining the distance, repair price and service score of each repair point and sorting them according to the preset repair point ranking table, a maintenance recommendation list is obtained and suitable repair points are recommended to the user, which quickly and efficiently solves the two problems of vehicle damage assessment and repair, saving time and improving the user experience.
  • FIG. 9 is a schematic block diagram of a vehicle damage assessment device 200 according to an embodiment of the present application.
  • the vehicle damage assessment device is used to execute the aforementioned vehicle damage assessment method.
  • the vehicle damage assessment device can be configured in a server or a terminal.
  • the vehicle loss assessment device 200 includes: an image acquisition module 201, a vehicle loss prediction module 202, and a loss assessment generation module 203.
  • the image acquisition module 201 is configured to acquire a car damage image uploaded by a terminal, and preprocess the car damage image to obtain an image to be predicted.
  • the car damage image includes the damaged part of the damaged vehicle photographed by the terminal.
  • the image acquisition module 201 includes: an image acquisition sub-module 2011, a judgment sub-module 2012, a normalization sub-module 2013, a brightness sub-module 2014, and a contrast sub-module 2015.
  • the image acquisition sub-module 2011 is configured to acquire the image taken by the terminal and the photographing parameters of the image from the terminal.
  • the judging sub-module 2012 is configured to judge whether the image meets the loss assessment condition according to the shooting parameters and the image, and if the image meets the loss assessment condition, determine the image as the car damage image; if If the image does not meet the loss determination condition, a shooting reminder is determined according to the image, and the shooting reminder is sent to the terminal.
  • the normalization sub-module 2013 is used to perform normalization processing on the car damage image to obtain a normalized image.
  • the brightness sub-module 2014 is used to perform brightness equalization processing on the normalized image to obtain a brightness equalized image.
  • the contrast sub-module 2015 is used to perform contrast enhancement processing on the brightness-balanced image to obtain the image to be predicted.
  • the car damage prediction module 202 is configured to determine the car damage information corresponding to the damaged vehicle according to the to-be-predicted image based on the car damage prediction model.
  • the car damage information includes the damaged part and the repair category.
  • the car loss prediction module 202 includes: a convolution sub-module 2021, a car loss prediction sub-module 2022, a prediction frame determination sub-module 2023, an overlap degree calculation sub-module 2024, and a car loss Determine the sub-module 2025.
  • the convolution sub-module 2021 is configured to input the to-be-predicted image into a trained car damage prediction model for convolution processing to obtain a feature map corresponding to the to-be-predicted image.
  • the car damage prediction sub-module 2022 is configured to perform car damage prediction on the feature map according to a plurality of preset prediction frames to obtain the category and confidence of each prediction frame.
  • the categories include damaged parts and repair categories.
  • the prediction frame determination sub-module 2023 is used to determine a preset number of candidate prediction frames from the prediction frames whose confidence is greater than the confidence threshold.
  • the overlap degree calculation sub-module 2024 is used to calculate the degree of overlap between different predictive frames to be selected, and filter out the predictive frames to be selected whose overlap degree is greater than the overlap degree threshold to obtain the target prediction frame.
  • the vehicle damage determination sub-module 2025 is configured to determine the vehicle damage information corresponding to the damaged vehicle according to the category of the target prediction frame.
  • the damage assessment generating module 203 is configured to obtain the repair information corresponding to the damaged part and the repair category, determine the damage assessment result of the damaged vehicle according to the repair information, and send the damage assessment result to the terminal .
  • the damage assessment generating module 203 includes: a vehicle model acquisition submodule 2031, a maintenance query submodule 2032, and a damage assessment calculation submodule 2033.
  • the vehicle model acquisition sub-module 2031 is configured to acquire the vehicle model corresponding to the damaged vehicle, and obtain the maintenance information table corresponding to the vehicle model from the database.
  • the maintenance query submodule 2032 is configured to query the maintenance information for repairing the damaged part in the maintenance information table according to the damaged part and the repair category.
  • the damage assessment calculation sub-module 2033 is used to generate the damage assessment result of the damaged vehicle according to the maintenance information of the damaged part.
  • the vehicle damage assessment device 200 further includes: a location acquisition module 204, an information acquisition module 205, a score generation module 206, and a push module 207.
  • the location acquisition module 204 is configured to obtain the location of the terminal if the loss assessment confirmation information is obtained from the terminal, the loss assessment confirmation information being sent by the terminal according to the user's confirmation operation of the loss assessment result.
  • the information acquisition module 205 is configured to determine a number of maintenance points located within a preset range of the damaged vehicle according to the location of the terminal, and obtain maintenance point information of each of the maintenance points.
  • the score generation module 206 is configured to determine the recommended score of each maintenance point according to the maintenance point information of each maintenance point based on a preset maintenance point ranking table.
  • the pushing module 207 is configured to push the maintenance point information of at least one maintenance point to the terminal according to the recommended score.
  • the vehicle loss assessment device 200 further includes: a model determination module 208, a sample image acquisition module 209, a vehicle loss training module 210, a loss value calculation module 211 and a parameter adjustment module 212.
  • the model determination module 208 is used to determine the initial vehicle damage prediction model.
  • the sample image acquisition module 209 is used to acquire the car damage sample image and the label information of the car damage sample image, and preprocess the car damage sample image to obtain a training sample image.
  • the car damage sample image includes the damaged part of the damaged vehicle, and the labeling information includes the damaged labeled part and the repair labeled category.
  • the car damage training module 210 is configured to input the training sample image into the initial car damage prediction model to obtain car damage information corresponding to the training sample image.
  • the car damage information includes damaged parts and repair categories.
  • the loss value calculation module 211 is configured to calculate a predicted loss value according to the damaged part and the damaged marked part, as well as the repair category and the repair marked category.
  • the parameter adjustment module 212 is configured to adjust the parameters in the initial car loss prediction model according to the predicted loss value to obtain a trained car loss prediction model.
  • the above-mentioned apparatus can be implemented in the form of a computer program, and the computer program can be run on the computer device as shown in FIG. 11.
  • FIG. 11 is a schematic block diagram of a structure of a computer device according to an embodiment of the present application.
  • the computer device may be a server.
  • the computer device includes a processor and a memory connected through a system bus, where the memory may include a non-volatile storage medium and an internal memory.
  • the processor is used to provide computing and control capabilities and support the operation of the entire computer equipment.
  • the internal memory provides an environment for the operation of the computer program in the non-volatile storage medium.
  • when the computer program is executed by the processor, it can cause the processor to execute any of the vehicle damage assessment methods.
  • the processor may be a central processing unit (Central Processing Unit, CPU); the processor may also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor.
  • the processor is used to run a computer program stored in a memory to implement the following steps:
  • acquire the car damage image uploaded by the terminal, and preprocess the car damage image to obtain an image to be predicted, the car damage image including the damaged part of the damaged vehicle photographed by the terminal; based on a car damage prediction model, determine the car damage information corresponding to the damaged vehicle according to the image to be predicted, the car damage information including the damaged part and the repair category; and acquire the repair information corresponding to the damaged part and the repair category, determine the damage assessment result of the damaged vehicle according to the repair information, and send the damage assessment result to the terminal.
  • the processor is used to implement the following when acquiring the car damage image uploaded by the terminal:
  • acquire, from the terminal, the image captured by the terminal and the shooting parameters of the image; determine, according to the shooting parameters and the image, whether the image meets the damage assessment condition; if the image meets the damage assessment condition, determine the image as the car damage image; if the image does not meet the damage assessment condition, determine a shooting reminder according to the image, and send the shooting reminder to the terminal.
  • when preprocessing the car damage image to obtain the image to be predicted, the processor is configured to: perform normalization processing on the car damage image to obtain a normalized image; perform brightness equalization processing on the normalized image to obtain a brightness-equalized image; and perform contrast enhancement processing on the brightness-equalized image to obtain the image to be predicted.
  • when determining, based on the car damage prediction model, the car damage information corresponding to the damaged vehicle according to the image to be predicted, the processor is configured to: input the image to be predicted into the trained car damage prediction model for convolution processing to obtain a feature map corresponding to the image to be predicted; perform car damage prediction on the feature map according to a plurality of preset prediction frames to obtain the category and confidence of each prediction frame, the category including the damaged part and the repair category; determine a preset number of candidate prediction frames from the prediction frames whose confidence is greater than the confidence threshold; calculate the overlap degree between different candidate prediction frames, and filter out the candidate prediction frames whose overlap degree is greater than the overlap degree threshold to obtain the target prediction frame; and determine the car damage information corresponding to the damaged vehicle according to the category of the target prediction frame.
  • when acquiring the repair information corresponding to the damaged part and the repair category and determining the damage assessment result of the damaged vehicle according to the repair information, the processor is configured to: acquire the vehicle model corresponding to the damaged vehicle, and obtain the maintenance information table corresponding to the vehicle model from a database; query, in the maintenance information table, the maintenance information for repairing the damaged part according to the damaged part and the repair category; and generate the damage assessment result of the damaged vehicle according to the maintenance information of the damaged part.
  • after sending the damage assessment result to the terminal, the processor is further configured to implement the following:
  • if damage assessment confirmation information is acquired from the terminal, acquire the location of the terminal, the damage assessment confirmation information being sent by the terminal according to the user's confirmation operation on the damage assessment result; determine, according to the location of the terminal, a number of maintenance points located within a preset range of the damaged vehicle, and acquire the maintenance point information of each maintenance point; determine, based on a preset maintenance point ranking table, the recommended score of each maintenance point according to the maintenance point information of each maintenance point; and push the maintenance point information of at least one maintenance point to the terminal according to the recommended score.
  • before determining, based on the car damage prediction model, the car damage information corresponding to the damaged vehicle according to the image to be predicted, the processor is further configured to implement the following:
  • determine an initial car damage prediction model; acquire a car damage sample image and the labeling information of the car damage sample image, and preprocess the car damage sample image to obtain a training sample image, the car damage sample image including the damaged part of a damaged vehicle, and the labeling information including the damaged labeled part and the repair labeled category; input the training sample image into the initial car damage prediction model to obtain the car damage information corresponding to the training sample image, the car damage information including the damaged part and the repair category; calculate a predicted loss value according to the damaged part and the damaged labeled part, as well as the repair category and the repair labeled category; and adjust the parameters in the initial car damage prediction model according to the predicted loss value to obtain the trained car damage prediction model.
  • the embodiments of the present application also provide a computer-readable storage medium; the computer-readable storage medium stores a computer program, the computer program includes program instructions, and when the program instructions are executed by a processor, they implement any vehicle damage assessment method provided in the embodiments of the present application.
  • the computer-readable storage medium may be non-volatile or volatile.
  • the computer-readable storage medium may be the internal storage unit of the computer device described in the foregoing embodiment, for example, the hard disk or memory of the computer device.
  • the computer-readable storage medium may also be an external storage device of the computer device, for example a plug-in hard disk, a smart media card (SMC), a secure digital card (SD card), or a flash card equipped on the computer device.
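
To tie the modules described above together, the following is a minimal, illustrative Python skeleton of the flow they implement — acquire and preprocess the image, run the trained car damage prediction model, then build the damage assessment result from a per-vehicle-model maintenance price table. Every class, method and parameter name here is an assumption made for illustration; none of them are names used by the embodiment.

```python
class VehicleDamageAssessor:
    """Illustrative skeleton: image -> car damage prediction -> damage assessment result."""

    def __init__(self, model, maintenance_tables):
        self.model = model                            # trained car damage prediction model (e.g. an SSD detector)
        self.maintenance_tables = maintenance_tables  # vehicle model -> {(damaged part, repair category): price}

    def assess(self, car_damage_image, vehicle_model):
        image = self.preprocess(car_damage_image)
        damages = self.model.predict(image)           # assumed to return [(damaged_part, repair_category), ...]
        price_table = self.maintenance_tables[vehicle_model]
        repair_value = sum(price_table[(part, category)] for part, category in damages)
        return {"damages": damages, "repair_value": repair_value}

    def preprocess(self, image):
        # normalization, brightness equalization and contrast enhancement, as described above
        raise NotImplementedError
```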

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Development Economics (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • Technology Law (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)

Abstract

一种车辆定损方法、装置、计算机设备和存储介质,可在人工智能中实现。该车辆定损方法包括:获取终端上传的车损图像,对车损图像进行预处理得到待预测图像,车损图像包括终端拍摄的受损车辆的受损部位(S10);基于车损预测模型,根据待预测图像确定受损车辆对应的车损信息,车损信息包括受损部位和修复类别(S20);获取与受损部位和修复类别对应的维修信息,根据维修信息确定受损车辆的定损结果,并将定损结果发送到终端(S30)。该方法可以准确得到受损车辆对应的车损信息,并根据车损信息对应的维修信息生成受损车辆的定损结果并发送到终端,解决了用户的车辆定损难题,提高了用户体验度。

Description

车辆定损方法、装置、计算机设备和存储介质
本申请要求于2020年01月13日提交中国专利局、申请号为202010032163.3,发明名称为“车辆定损方法、装置、计算机设备和存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及预测模型领域,尤其涉及一种车辆定损方法、装置、计算机设备和存储介质。
背景技术
在现有的车型定损过程中,在业务员没能及时到达现场对车辆的受损程度进行定损时,通过车主通过拍摄车辆的受损部位的照片并上传给业务员进行判断。
发明人意识到,由于缺乏经验,车主自行拍摄照片时,经常采集到无法进行定损的照片。车主在重新拍摄照片时,已经丧失最佳的拍摄时机,这严重影响定损处理效率和用户定损服务体验。另外,车主在拍摄照片时,无法自行判断车辆的受损程度以及需要的维修信息。
发明内容
本申请提供了一种车辆定损方法、装置、计算机设备和存储介质,可以更精确地实现受损车辆的定损预测,预测效率较高。
第一方面,本申请提供了一种车辆定损方法,所述方法包括:
获取终端上传的车损图像,对所述车损图像进行预处理得到待预测图像,所述车损图像包括所述终端拍摄的受损车辆的受损部位;
基于车损预测模型,根据所述待预测图像确定所述受损车辆对应的车损信息,所述车损信息包括受损部位和修复类别;
获取与所述受损部位和修复类别对应的维修信息,根据所述维修信息确定所述受损车辆的定损结果,并将所述定损结果发送到所述终端。
第二方面,本申请还提供了一种车辆定损装置,所述装置包括:
图像获取模块,用于获取终端上传的车损图像,对所述车损图像进行预处理得到待预测图像,所述车损图像包括所述终端拍摄的受损车辆的受损部位;
车损预测模块,用于基于车损预测模型,根据所述待预测图像确定所述受损车辆对应的车损信息,所述车损信息包括受损部位和修复类别;
定损生成模块,用于获取所述受损部位和修复类别对应的维修信息,根据所述维修信息确定所述受损车辆的定损结果,并将所述定损结果发送到所述终端。
第三方面,本申请还提供了一种计算机设备,所述计算机设备包括存储器和处理器;
所述存储器,用于存储计算机程序;
所述处理器,用于执行所述计算机程序并在执行所述计算机程序时实现以下步骤:
获取终端上传的车损图像,对所述车损图像进行预处理得到待预测图像,所述车损图 像包括所述终端拍摄的受损车辆的受损部位;
基于车损预测模型,根据所述待预测图像确定所述受损车辆对应的车损信息,所述车损信息包括受损部位和修复类别;
获取与所述受损部位和修复类别对应的维修信息,根据所述维修信息确定所述受损车辆的定损结果,并将所述定损结果发送到所述终端。
第四方面,本申请还提供了一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,所述计算机程序包括程序指令,所述程序指令被处理器执行时,用于实现以下步骤:
获取终端上传的车损图像,对所述车损图像进行预处理得到待预测图像,所述车损图像包括所述终端拍摄的受损车辆的受损部位;
基于车损预测模型,根据所述待预测图像确定所述受损车辆对应的车损信息,所述车损信息包括受损部位和修复类别;
获取与所述受损部位和修复类别对应的维修信息,根据所述维修信息确定所述受损车辆的定损结果,并将所述定损结果发送到所述终端。
本申请公开了一种车辆定损方法、装置、计算机设备和存储介质,通过在获取终端上传的车损图像时进行预处理,可以确保车损图像的质量,提高定损结果的准确率;通过训练好的车损预测模型对待预测图像进行车损预测,可以准确得到受损车辆对应的车损信息,预测效率较高,节省定损时间;之后根据维修信息表得到与车损信息中的受损部位、修复类别对应的维修信息,进而得到定损结果,不仅解决了用户的定损难题,而且还提高了用户的体验度。
附图说明
图1是本申请的实施例提供的一种车辆定损方法的示意流程图;
图2是图1中获取车损图像与预处理的子步骤示意流程图;
图3是本申请实施例提供的判断拍摄受损部位的距离的场景示意图;
图4是本申请实施例提供的判断拍摄受损部位的距离的另一场景示意图;
图5是本申请实施例提供的一种车损预测模型的训练方法的示意流程图;
图6是图1中确定车损信息的子步骤示意流程图;
图7是本申请实施例提供的预测车损信息的场景示意图;
图8是图1中定损结果发送到终端之后的步骤示意流程图;
图9是本申请实施例提供的一种车辆定损装置的示意性框图;
图10是图9中车辆定损装置的子模块的示意性框图;
图11为本申请实施例提供的一种计算机设备的结构示意性框图。
具体实施方式
下面结合附图,对本申请的一些实施方式作详细说明。在不冲突的情况下,下述的实施例及实施例中的特征可以相互组合。
如图1所示,车辆定损方法包括步骤S10至步骤S30。
步骤S10、获取终端上传的车损图像,对所述车损图像进行预处理得到待预测图像,所述车损图像包括所述终端拍摄的受损车辆的受损部位。
示例性的,所述终端可以是智能手机、平板电脑、笔记本电脑、个人数字助理和穿戴式设备等电子设备。
在一些实施例中,用户通过所述终端拍摄受损车辆中的受损部位,并将拍摄到的车损图像通过所述终端上传到服务器中处理。所述服务器在获取到所述终端上传的车损图像后,对所述车损图像进行预处理得到待预测图像。
其中,所述车损图像包括所述终端拍摄的受损车辆的受损部位。
需要说明的,所述预处理可以包括图像归一化处理、亮度均衡处理和对比度增强处理。
请参阅图2,步骤S10中获取终端上传的车损图像,包括以下步骤S11至步骤S14。
步骤S11、从所述终端获取所述终端拍摄的图像以及所述图像的拍摄参数。
具体地,所述服务器在获取所述终端上传的车损图像时,可以获取所述终端的拍摄界面显示的图像以及所述图像的拍摄参数。
示例性的,所述拍摄参数可以包括拍摄距离,所述拍摄距离是指所述终端的摄像头到所述受损车辆的受损部位的距离。
示例性的,所述服务器还可以根据从所述终端获取的图像确定图像的识别度与分辨率等拍摄参数。其中,识别度是指对受损部位的识别程度。
步骤S12、根据所述拍摄参数和所述图像判断所述图像是否符合定损条件。
具体地,所述服务器根据所述拍摄参数和所述图像判断所述图像是否符合定损条件,若所述图像符合所述定损条件,将所述图像确定为所述车损图像。
示例性的,所述定损条件可以包括拍摄距离、识别度和分辨率等参数处于预设的范围。若所述拍摄距离、识别度和分辨率都处于预设的范围内,则判定所述图像符合所述定损条件。
通过对终端上传的车损图像判断是否符合定损条件,可以确保车损图像的质量,提高定损结果的准确率。
在一些实施例中,所述服务器可以根据所述拍摄距离与预设距离之间的差值判断所述图像是否符合所述定损条件。示例性的,若所述拍摄距离与预设距离之差的绝对值小于30cm,则判定所述拍摄距离满足所述定损条件。
其中,所述预设距离可以取150cm。示例性的,若所述拍摄距离与所述预设距离之差的绝对值小于30cm,则判定所述拍摄距离符合所述定损条件,所述服务器通过所述终端在拍摄界面输出“距离合适”。若所述拍摄距离与所述预设距离之差的绝对值不小于30cm,则判定所述图像不符合所述定损条件,所述服务器通过所述终端在拍摄界面输出“距离太远”。
通过对拍摄距离进行判断,可以使拍摄得到的图像尽可能反映受损部位,并减少外部不必要的区域,从而使得后续定损结果的准确率更高。
在另一些实施例中,所述服务器可以获取所述图像的识别度,并判定所述识别度是否大于预设识别度。
示例性的,若所述图像的受损部位的识别度大于所述预设识别度,则判定所述图像符合所述定损条件。其中,所述预设识别度可以是90%。
在本实施例中,所述服务器可以通过所述终端实时抓取所述终端的拍摄界面内的画面,将抓取到的视频帧输入训练好的车损预测模型中。该车损预测模型对视频帧进行提取特征并根据得到的特征图对视频帧进行分类,输出视频帧对应类别的置信度,所述置信度可以用于表示视频帧中受损部位对应的识别度。
其中,车损预测模型输出所述视频帧的多个类别的置信度,取最大置信度对应的类别作为所述视频帧的类别,所述最大置信度为所述视频帧的置信度。
通过车损预测模型对受损部位进行分类识别,根据识别结果调整拍摄的部位,可以确保拍摄到需要定损的部位,提高后续的定损准确率,避免拍摄到无法识别或无法分类的部位。
在另一些实施例中,所述服务器可以根据所述图像的分辨率判定所述分辨率是否大于预设分辨率。示例性的,若所述图像的分辨率大于所述预设分辨率,则判定所述图像符合所述定损条件。
其中,所述预设分辨率可以为100PPI(Pixels Per Inch,每英寸像素)。
在本实施例中,若所述图像的分辨率大于所述预设分辨率,例如所述图像的分辨率为120PPI,则判定所述图像符合所述定损条件;若所述图像的分辨率不大于所述预设分辨率,例如所述图像的分辨率为80PPI,则拒绝接收所述图像并通过所述终端提示用户重新拍摄图像并上传。
通过判断图像的分辨率,避免将低分辨率的图像作为车损图像去定损,可以确保上传的车损图像的分辨率满足要求,便于进行后续的预处理与车损预测,提高定损结果的准确性。
步骤S13、若所述图像符合所述定损条件,将所述图像确定为所述车损图像。
示例性的,若所述图像符合所述定损条件,将所述图像确定为所述车损图像。所述服务器可以在所述终端的拍摄界面提示用户将所述车损图像上传。
在一些实施例中，如图3所示，若所述服务器检测到拍摄距离满足所述定损条件，则拍摄界面中显示“距离合适”，同时在拍摄界面中显示受损部位以及受损部位对应的识别度，例如“后叶子板(左)，识别度93%”。
通过在终端上提示用户上传符合定损条件的车损图像,得到的车损图像包括将更为清晰、大小合适的受损车辆的受损部位,可以提高后续定损结果的准确率。
步骤S14、若所述图像不符合定损条件,根据所述图像确定拍摄提示,并将所述拍摄提示发送给所述终端。
在一些实施例中,如图4所示,若所述图像不符合定损条件,例如所述拍摄距离与所述预设距离之差的绝对值不小于30cm,所述服务器根据所述图像的拍摄距离确定拍摄提示, 例如拍摄提示为“距离太远,检测不到部位”。所述服务器将所述拍摄提示发送到所述终端中,所述终端在拍摄界面显示所述拍摄提示。所述终端对应的用户可以根据所述拍摄提示,调整拍摄距离,直到拍摄的图像符合所述定损条件为止。
通过对终端上传的车损图像判断是否符合定损条件,可以确保车损图像的质量,提高定损结果的准确率。
在本申请的实施例中,所述服务器对所述终端上传的车损图像进行预处理,得到所述车损图像对应的待预测图像。请参阅图2,步骤S10中对所述车损图像进行预处理得到待预测图像,包括步骤S15至步骤S17。
步骤S15、对所述车损图像进行归一化处理,得到归一化后的图像。
在一些实施例中,所述服务器对所述车损图像进行归一化处理,以将所述车损图像转换成标准形式的图像。
需要说明的是,归一化的过程包括4个步骤,即坐标中心化、x-shearing归一化、缩放归一化和旋转归一化。示例性的,归一化可以使用premnmx、postmnmx、tramnmx、mapminmax等函数进行处理。
在本实施例中,使用premnmx函数将所述车损图像的0-255的UNIT型数据转换到0-1之间。
对车损图像进行归一化处理,可以找出车损图像中的不变量,例如可以减小车损图像由于光线不均匀造成的干扰。
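
As an illustration of the normalization in step S15 — mapping 8-bit pixel values from 0–255 into the 0–1 range (the description mentions MATLAB-style premnmx-type functions) — a minimal NumPy sketch follows; it is an assumed equivalent, not the embodiment's exact implementation:

```python
import numpy as np

def normalize_image(image: np.ndarray) -> np.ndarray:
    """Map an 8-bit car damage image from the [0, 255] range into [0.0, 1.0]."""
    return image.astype(np.float32) / 255.0
```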
步骤S16、对所述归一化后的图像进行亮度均衡处理,得到亮度均衡后的图像。
示例性的,基于HSV颜色空间,所述服务器对标准图像进行亮度均衡处理。
可以理解的,在HSV颜色空间模型中,色调H表示颜色的类型,饱和度S表示颜色接近光谱色的程度,明度V表示颜色明亮的程度。其中,六边形边界表示色调H,水平轴表示饱和度S,明度V沿垂直轴测量。
在一些实施例中,所述服务器先将所述标准图像转换至HSV颜色空间,分别得到所述标准图像在六边形边界上的色调,在水平轴上的饱和度,以及在垂直轴上的明度。然后所述服务器在HSV颜色空间中调节明度V分量,使得图像整体亮度均衡。示例性的,在垂直轴方向调节V分量,使V分量达到预设亮度值,所述预设亮度值用于表示图像的亮度达到最佳值,例如0.618。
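
As an illustration of step S16 — converting the image to HSV space and adjusting the V (value) channel so that the overall brightness approaches a preset value such as 0.618 — here is a minimal OpenCV sketch. The scaling rule used to reach the target is an assumption, since the description only states that the V component is adjusted toward the preset value:

```python
import cv2
import numpy as np

def balance_brightness(image_bgr: np.ndarray, target_v: float = 0.618) -> np.ndarray:
    """Adjust the V (value) channel in HSV space so the mean brightness approaches target_v."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    h, s, v = cv2.split(hsv)
    v_norm = v / 255.0
    mean_v = float(v_norm.mean())
    if mean_v > 0:
        v_norm = np.clip(v_norm * (target_v / mean_v), 0.0, 1.0)  # scale toward the target brightness
    balanced = cv2.merge([h, s, v_norm * 255.0]).astype(np.uint8)
    return cv2.cvtColor(balanced, cv2.COLOR_HSV2BGR)
```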
步骤S17、对所述亮度均衡后的图像进行对比度增强处理,得到待预测图像。
在一些实施例中,根据直方图均衡算法,所述服务器对所述亮度均衡处理后的标准图像进行对比度增强处理,得到待预测图像。
需要说明的是,直方图均衡化算法通过对图像进行非线性拉伸,重新分配图像像素值,使一定灰度范围内的像素数量大致相同。
所述直方图均衡化的映射方法为:
$$S_k=\sum_{j=0}^{k}\frac{n_j}{n}$$
式中，$S_k$ 表示图像灰度概率密度的累积分布，$k$ 表示灰度级的序号，$n_k$ 表示第 $k$ 灰度级的像素个数，$n$ 表示图像中像素的总和。
通过对标准图像的像素灰度做映射变换,得到的待预测图像的灰度概率密度呈均匀分布;同时,增加待预测图像的灰度动态范围,可以提高待预测图像的对比度。
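
A minimal NumPy sketch of the histogram-equalization mapping $S_k=\sum_{j\le k} n_j/n$ used in step S17, applied to a single-channel 8-bit image; OpenCV's built-in cv2.equalizeHist performs the same remapping. This is an illustrative implementation, not the embodiment's exact code:

```python
import numpy as np

def equalize_histogram(gray: np.ndarray) -> np.ndarray:
    """Histogram equalization: remap grey levels so the cumulative distribution becomes uniform."""
    hist = np.bincount(gray.ravel(), minlength=256)   # n_k: number of pixels at each grey level
    cdf = hist.cumsum() / gray.size                    # S_k = sum_{j<=k} n_j / n
    lut = np.round(cdf * 255.0).astype(np.uint8)       # map the cumulative distribution back to [0, 255]
    return lut[gray]
```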
通过对车损图像进行预处理,可以使车损图像的整体亮度均衡和各个部位以及损坏的部位更突出。而且经预处理后得到的待预测图像,可以提高在车损预测模型中特征提取的准确率,得到的车损预测结果更准确。
步骤S20、基于车损预测模型,根据所述待预测图像确定所述受损车辆对应的车损信息,所述车损信息包括受损部位和修复类别。
具体地,所述服务器将所述待预测图像输入车损预测模型进行车损预测,所述车损预测模型输出所述受损车辆对应的车损信息。
示例性的,所述车损信息包括受损部位和修复类别。例如,车损信息可以是:右后门刮花,需补漆。
示例性的,所述车损预测模型可以包括SSD网络。通过在SSD网络提取不同尺度的特征图来做检测,大尺度特征图可以用来检测小物体,而小尺度特征图用来检测大物体,可以适应不同的目标。
具体地,所述服务器根据GPU集群计算所述受损车辆对应的车损信息。
需要说明的是,GPU(Graphics Processing Unit,图形处理单元)集群是一个计算机集群,其中每个节点配备有图形处理单元。由于通用计算的GPU具有很高的数据并行架构,可以并行处理大量的数据点,从而可以使GPU集群执行非常快速的计算,提高计算吞吐量。
具体地,所述服务器在基于车损预测模型,根据所述待预测图像确定所述受损车辆对应的车损信息之前,还需要对初始的车损预测模型进行训练,得到训练好的车损预测模型。请参阅图5,图5是本申请的实施例提供一种车损预测模型的训练方法的示意性流程图。所述训练方法,包括步骤S101至步骤S105。
步骤S101、确定初始车损预测模型。
示例性的,所述初始车损预测模型用于预测任一车损样本图像对应的车损信息,获取车损信息中的受损部位与修复类型对应的预测损失值。
可以理解的,所述初始车损预测模型可以是如下任一网络:单目标多框检测器(Single Shot Multibox Detector,SSD)网络,卷积神经网络(Convolutional Neural Network,CNN)、受限玻尔兹曼机(Restricted Boltzmann Machine,RBM)或循环神经网络(Recurrent Neural Network,RNN)。
在本实施例中,所述初始车损预测模型为SSD网络。
具体地,SSD网络采用VGG16网络结构做基础模型,通过卷积层Conv4_3、Conv7,Conv8_2,Conv9_2,Conv10_2,Conv11_2将车损样本图像卷积得到不同尺度的特征图,所述特征图用于预测所述车损样本图像对应的车损信息,例如受损部位和修复类别。
步骤S102、获取车损样本图像和所述车损样本图像的标注信息,对所述车损样本图像 进行预处理得到训练样本图像,所述车损样本图像包括受损车辆的受损部位,所述标注信息包括受损标注部位和修复标注类别。
具体地,所述服务器配置车辆各个受损部位对应的预设数量的车损样本图像,并在所述车损样本图像中进行受损部位与修复类别的标注,得到包括标注信息的车损样本图像。
其中,所述标注信息包括受损标注部位和修复标注类别。示例性的,受损标注部位和修复标注类别可以包括:损坏的门把手,需要更换、刮花的车门,需要补漆、漏气的轮胎,需要维修、刮花的左前门,需要补漆、刮花的右前门,需要补漆、刮花的左叶子板,需要补漆、刮花的右叶子板,需要补漆、损坏的前保险杠,需要维修、损坏的后保险杠,需要维修等。
具体地,将所述车损样本图像进行预处理,以获得训练所述车损预测模型的训练样本图像。
在一些实施例中,对所述车损样本图像进行预处理,例如归一化处理、亮度均衡处理和对比度增强处理,使得所述训练样本图像具有相同的尺寸,整体亮度均衡和图像中各个部位以及损坏的部位更突出,可以有效提高所述训练样本图像在车损预测模型中的特征提取的准确率,提高训练的精确度。
在具体的实现过程中,将所述训练样本图像分为第一比例的训练集和第二比例的验证集,其中,第一比例可以是70%,第二比例可以是30%。
需要说明的是,所述训练集用于训练所述初始车损预测模型,所述验证集用于验证经所述训练集训练后的初始车损预测模型。
步骤S103、将所述训练样本图像输入所述初始车损预测模型,得到所述训练样本图像对应的车损信息,所述车损信息包括受损部位和修复类别。
具体地,将上述的所述训练集输入所述初始车损预测模型中进行卷积处理,得到所述训练集对应的特征图,并对所述特征图进行先验框匹配,得到所述特征图对应的预测框。其中,所述预测框包括车损信息,例如预测的受损部位与预测的修复类别。
在具体的实现过程中,将所述训练集中的各个训练样本图像分别输入所述初始车损预测模型中,每一训练样本图像经各卷积层进行卷积,各卷积层分别用两个不同的3×3的卷积核进行卷积;其中一个卷积核输出的特征图用于计算置信度损失,另一个卷积核输出的特征图用于计算定位损失。
在具体的实现过程中,对特征图进行先验框与真实框匹配,以确定所述先验框对应的训练样本。其中,真实框是在训练样本图像中受损标注部位和修复标注类别对应的框。
在所述初始车损预测模型采用了不同尺度和长宽比的先验框,先验框用于确定训练样本,训练样本包括正样本与负样本。先验框对应的预测框用于预测受损部位的回归与受损部位对应的修复类别的分类。
其中,预测框分为两个部分,第一部分是各个修复类别的置信度,第二部分就是预测框的位置,包含4个值(cx,cy,w,h),分别表示预测框的中心坐标以及宽高。预测框的位置即是受损部位的区域。
可以理解的,预测框是先验框的实际选取,预测框是以先验框为基准,可以在一定程度上减少训练难度。
在具体的实现过程中,根据先验框与真实框之间的交并比(Intersection Over Union,IOU),确定训练样本。若先验框对应的IOU大于IOU阈值,则判断所述先验框与真实框匹配,将所述先验框对应的预测框标记为正样本;若先验框对应的IOU不大于IOU阈值,则所述先验框与真实框不匹配,将所述先验框对应的预测框标记为负样本。
其中,所述IOU表示先验框与真实框之间的重叠度,使用Jaccard系数计算IOU:
$$\mathrm{IOU}=J(A,B)=\frac{|A\cap B|}{|A\cup B|}$$
式中,A表示先验框的面积,B表示真实框的面积。
示例性的,所述IOU阈值可以是0.5。
在一些实施例中,若先验框A与真实框B的IOU值为0.9,大于IOU阈值,则将先验框A对应的预测框标记为正样本;若先验框C与真实框B的IOU值为0.7,则将先验框C对应的预测框也标记为正样本。
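
A small sketch of the IOU (Jaccard) computation and the positive/negative matching rule described above; boxes are assumed to be given as (x1, y1, x2, y2) corner coordinates, and the 0.5 threshold follows the description:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def label_prior(prior_box, gt_box, iou_threshold=0.5):
    """Mark a prior box as a positive sample if its IOU with the ground-truth box exceeds the threshold."""
    return "positive" if iou(prior_box, gt_box) > iou_threshold else "negative"
```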
步骤S104、根据所述受损部位和所述受损标注部位,以及所述修复类别和所述修复标注类别计算预测损失值。
可以理解的,上述步骤得到的真实框包括受损标注部位和修复标注类别,先验框包括受损部位与修复类别。
具体地,采用损失函数计算训练样本图像对应的训练样本的定位损失与置信度损失。其中,定位损失是指受损部位与受损标注部位之间的位置差,置信度损失是指受损部位对应的修复类别的归一化损失值。
需要说明的是，损失函数为定位损失（localization loss，loc）与置信度损失（confidence loss，conf）的加权和，损失函数 $L$ 的定义如下：
$$L(x,c,l,g)=\frac{1}{N}\left(L_{conf}(x,c)+\alpha L_{loc}(x,l,g)\right)$$
式中，$N$ 是正样本数量，$c$ 为置信度的预测值，$l$ 为预测框的位置预测值，而 $g$ 是真实框的位置参数；权重系数 $\alpha$ 通过交叉验证设置为 1。
其中，定位损失 $L_{loc}$ 表示计算预测框与真实框之间的位置差，采用 Smooth L1 损失函数，定位损失 $L_{loc}$ 的定义如下：
$$L_{loc}(x,l,g)=\sum_{i\in Pos}^{N}\sum_{m\in\{cx,cy,w,h\}} x_{ij}^{k}\,\mathrm{smooth}_{L1}\!\left(l_i^{m}-\hat{g}_j^{m}\right)$$
其中，Smooth L1 损失函数为：
$$\mathrm{smooth}_{L1}(x)=\begin{cases}0.5x^{2}, & |x|<1\\ |x|-0.5, & \text{其他}\end{cases}$$
式中，$\{cx,cy,w,h\}$ 分别表示预测框或真实框的中心坐标以及宽高；$l_i^{m}$ 为第 $i$ 个预测框的位置预测值，$\hat{g}_j^{m}$ 是第 $j$ 个真实框的位置；$k$ 表示真实框的类别，即预测受损部位对应的修复类别。由于指示参数 $x_{ij}^{k}$ 的存在，所以定位损失仅针对正样本进行计算。
其中，置信度损失 $L_{conf}$ 是计算所有修复类别的置信度的 softmax 损失，输入为每一修复类别的置信度的预测值，置信度损失 $L_{conf}$ 定义为：
$$L_{conf}(x,c)=-\sum_{i\in Pos}^{N} x_{ij}^{p}\log\!\left(\hat{c}_i^{p}\right)-\sum_{i\in Neg}\log\!\left(\hat{c}_i^{0}\right)$$
其中，
$$\hat{c}_i^{p}=\frac{\exp\!\left(c_i^{p}\right)}{\sum_{p}\exp\!\left(c_i^{p}\right)}$$
式中，$i$ 表示第 $i$ 个预测框，$j$ 表示第 $j$ 个真实框；$x_{ij}^{p}$ 是指示参数，$x_{ij}^{p}=1$ 时表示第 $i$ 个预测框与第 $j$ 个真实框关于类别 $p$ 匹配；$\hat{c}_i^{0}$ 用于表示预测框属于背景的置信度；$Pos$ 表示正样本的个数，$Neg$ 表示负样本的个数。
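
As a simplified illustration of the loss defined above — Smooth L1 for localization plus a cross-entropy confidence term, combined as $L=\frac{1}{N}(L_{conf}+\alpha L_{loc})$ — a compact NumPy sketch follows. It assumes the class scores have already been passed through softmax and omits the hard-negative mining used in a full SSD loss:

```python
import numpy as np

def smooth_l1(x: np.ndarray) -> np.ndarray:
    """smooth_L1(x) = 0.5 * x^2 if |x| < 1, else |x| - 0.5."""
    absx = np.abs(x)
    return np.where(absx < 1.0, 0.5 * x ** 2, absx - 0.5)

def total_loss(loc_pred, loc_gt, class_probs, class_targets, num_pos, alpha=1.0):
    """L = (L_conf + alpha * L_loc) / N, following the weighted sum defined in the description."""
    l_loc = smooth_l1(loc_pred - loc_gt).sum()
    l_conf = -np.log(class_probs[np.arange(len(class_targets)), class_targets] + 1e-12).sum()
    return (l_conf + alpha * l_loc) / max(num_pos, 1)
```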
步骤S105、根据所述预测损失值调整所述初始车损预测模型中的参数,以得到训练好的车损预测模型。
具体的,根据所述验证集验证训练的初始车损预测模型。
在具体的实现过程中,将所述验证集中的训练样本图像输入所述初始车损预测模型中,输出对应的预测框;其中,所述预测框包括预测的受损部位与预测的修复类别。然后根据所述受损部位和所述受损标注部位,以及所述修复类别和所述修复标注类别计算预测损失值。
示例性的,若所述预测损失值小于或者等于预设损失值,则训练结束。若预测损失值大于所述预设损失值,则增加各个受损部位对应的车损样本图像的数量并重新执行上述步骤S102至步骤S105,直至训练的初始车损预测模型的预测损失值小于或者等于预设损失值,则训练结束,得到训练好的车损预测模型。
可以理解的,增加各个受损部位对应的车损样本图像的数量,可以改变所述训练集中的训练样本图像,进而调整所述初始车损预测模型在预测过程中的预测框的参数,例如预测框的置信度损失与定位损失。
上述实施例提供的训练方法,通过对车损样本图像进行预处理,可以有效提高车损样本图像在车损预测模型中的特征提取的准确率,提高训练的精确度;通过将训练样本图像分为训练集与验证集,可以根据预测损失值不断调整初始车损预测模型的参数,可以提高训练好的车损预测模型的预测准确度与鲁棒性。
请参阅图6,步骤S20中基于车损预测模型,根据所述待预测图像确定所述受损车辆对应的车损信息,包括步骤S21至步骤S25。
步骤S21、将所述待预测图像输入训练好的车损预测模型进行卷积处理,得到所述待预测图像对应的特征图。
示例性的,所述服务器将所述待预测图像输入训练好的车损预测模型中,所述待预测图像经卷积层Conv4_3、Conv7,Conv8_2,Conv9_2,Conv10_2,Conv11_2进行卷积,各卷积层分别用两个不同的3×3的卷积核进行卷积;其中一个卷积核输出的特征图用于计算类别的置信度,另一个卷积核输出的特征图用于计算回归的定位。
步骤S22、根据预设的多个预测框对所述特征图进行车损预测,得到各所述预测框的类别与置信度,所述类别包括受损部位和修复类别。
示例的,对于每个特征图,使用预设的多个预测框进行检测。
可以理解的,预设的预测框是先验框的实际选取,所述预测框用于预测所述特征图的类别。所述先验框用于确定训练所述车损预测模型的训练样本。
具体地,对于每个预测框,首先根据置信度对应的类别确定预测框的类别。示例性的,最大值的置信度对应的类别为预测框的类别。
示例性的,所述类别包括受损部位和修复类别,例如损坏的门把手,需要更换、刮花的车门,需要补漆、刮花的左前门,需要补漆、刮花的右前门,需要补漆、刮花的左叶子板,需要补漆、损坏的前保险杠,需要维修等。
在一些实施例中,若有预测框与多个类别匹配,例如预测框A与类别1匹配时的置信度a1为0.95,预测框A与类别2匹配时的置信度b1为0.85;由于置信度a1大于置信度b1,因此所述预测框A对应的类别为类别1。若所述类别1为刮花的左前门,需要补漆,则所述预测框A的类别为刮花的左前门,需要补漆。
步骤S23、从置信度大于置信度阈值的预测框中确定预设数目的待选预测框。
具体地,所述服务器根据置信度阈值对已确定类别的预测框进行筛选,得到筛选后的预测框。
示例性的,所述置信度阈值可以是0.8。将置信度低于0.8的预测框过滤掉,保留置信度大于或等于0.8的预测框。
具体地,所述服务器从所述筛选后的预测框中确定预设数目的待选预测框。
示例性的,对所述筛选后的预测框按照置信度进行降序排列,根据所述预设数目将排列在前的k个预测框保留,其余的预测框删除。其中,k表示所述预设数目,例如k=3。
需要说明的,若所述筛选后的预测框的个数不大于所述预设数目,则将所述筛选后的预测框全部确定为待选预测框。
在一些实施例中,若类别1对应的筛选后的预测框有8个;所述服务器对所述预测框进行置信度降序排序,将置信度最大的3个预测框保留,其余的预测框剔除,得到3个待选预测框。
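
An illustrative sketch of steps S22–S23: keep only the prediction frames whose confidence exceeds the threshold (0.8 in the example), sort them by confidence, and retain at most the preset number k (3 in the example). The dictionary layout of a prediction is an assumption for illustration:

```python
def select_candidates(predictions, conf_threshold=0.8, top_k=3):
    """predictions: list of dicts like {"category": str, "confidence": float, "box": (x1, y1, x2, y2)}."""
    kept = [p for p in predictions if p["confidence"] >= conf_threshold]
    kept.sort(key=lambda p: p["confidence"], reverse=True)
    return kept[:top_k]
```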
步骤S24、计算不同待选预测框之间的重叠度,将重叠度大于重叠度阈值的待选预测 框过滤掉,得到目标预测框。
具体地,所述服务器计算不同待选预测框之间的重叠度。
所述重叠度是指两个待选预测框的相交面积与该两个待选预测框的相并面积之比，所述重叠度可以用交并比（Intersection over Union，IOU）表示，IOU 的计算公式为：
$$\mathrm{IOU}=\frac{|A\cap B|}{|A\cup B|}$$
式中，$A$、$B$ 分别表示不同的两个待选预测框的面积。
具体地，所述服务器将重叠度大于重叠度阈值的待选预测框过滤掉，得到目标预测框。示例性的，所述重叠度阈值可以为 0.5。
在一些实施例中,所述服务器根据NMS算法将重叠度大于所述重叠度阈值的待选预测框过滤掉。通过NMS算法,将重叠度大于所述重叠度阈值的待选预测框剔除,保留一个最大置信度的待选预测框,即目标预测框。
需要说明的是,非极大值抑制(Non-Maximum Suppression,NMS)算法用于移除多余的待选预测框。
其中,NMS算法的具体步骤如下:
(1)将同一类别对应的待选预测框进行置信度排序,选出最大置信度与最大置信度对应的待选预测框。
(2)遍历剩余的待选预测框,计算剩余的待选预测框与最大置信度对应的待选预测框的重叠度,若存在重叠度大于所述重叠度阈值的待选预测框,将该待选预测框删除。
(3)从未处理的待选预测框中继续挑选一个最大置信度的待选预测框,重复步骤(1)和步骤(2),直至剩下一个待选预测框。
在本实施例中,若有两个待选预测框对应的类别相同,经过NMS算法可以将类别相同的两个待选预测框合并成一个。例如,若两个待选预测框对应的类别都是“左前门刮花,需要补漆”,则将相同类别的待选预测框合并成一个,得到的目标预测框的类别为“左前门刮花,需要补漆”。
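
A minimal sketch of the NMS procedure in steps (1)–(3) above, reusing the iou helper from the earlier sketch; candidates whose overlap with the current highest-confidence frame exceeds the 0.5 threshold are discarded:

```python
def nms(candidates, iou_threshold=0.5):
    """Keep the highest-confidence frame and drop candidates that overlap it too much, repeatedly."""
    remaining = sorted(candidates, key=lambda p: p["confidence"], reverse=True)
    kept = []
    while remaining:
        best = remaining.pop(0)                      # current highest-confidence frame
        kept.append(best)
        remaining = [p for p in remaining
                     if iou(p["box"], best["box"]) <= iou_threshold]  # iou() as sketched earlier
    return kept
```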
步骤S25、根据所述目标预测框的类别确定所述受损车辆对应的车损信息。
由于上述步骤得到目标预测框,因此所述服务器可以根据得到的目标预测框对应的类别确定所述受损车辆对应的车损信息。所述车损信息可以用所述目标预测框的类别表示。
示例性的,若所述目标预测框的类别为“左前门刮花,需要补漆”,则所述服务器可以确定所述受损车辆对应的车损信息为“左前门刮花,需要补漆”,其中受损部位为“左前门刮花”,修复类别为“需要补漆”。
在一些实施例中,如图7所示,所述服务器将待预测图像输入训练好的车损预测模型,该车损预测模型输出包括两个待选预测框,分别标记为box1和box2。其中box1的类别为“右后门刮花,需补漆”;box2的类别为“右前门刮花,需补漆”。
通过车损预测模型对待预测图像进行车损预测,可以准确得到受损车辆对应的车损信息;车损信息包括受损部位和修复类别,可以为定损提供更丰富的信息。在车损预测模型 中采用GPU集群进行计算,可以快速处理大量数据,缩短车损预测模型的检测时间,可以快速得到受损车辆对应的车损信息。
步骤S30、获取与所述受损部位和修复类别对应的维修信息,根据所述维修信息确定所述受损车辆的定损结果,并将所述定损结果发送到所述终端。
具体地,所述服务器在查询所述受损部位和修复类别对应的维修信息之前,需要获取所述受损车辆对应的车型。
在一些实施例中,所述服务器可以通过控制所述终端弹出提示框或发出语音提示进行提醒用户输入识别信息,根据所述识别信息获取对应的投保单号,从而根据所述投保单号获取所述受损车辆的车型。
需要说明的是，所述识别信息可以包括车牌号码和VIN(Vehicle Identification Number)识别码。VIN识别码由17位字母、数字组成，是制造厂为了识别车辆而指定的一组字码，具有对车辆的唯一识别性。
示例性的,车型可以包括小型车、中型车或大型车等。例如,所述受损车辆对应的车型可以为大型车。
通过提醒用户输入识别信息,可以获取受损车辆的车型,可以根据受损车辆的车型从数据库中获取与该车型对应的维修信息表。
在另一些实施例中，所述服务器可以通过所述终端提示用户输入所述受损车辆的车型。
示例性的,所述维修信息表可以包括维修价格表。
具体地,所述服务器根据所述受损车辆对应的车型,从数据库中获取与所述车型对应的维修信息表。示例性的,所述服务器从数据库中获取与所述车型对应的维修价格表,如表1所示:
表1为不同车型对应的维修价格表
车型 维修价格表
微型车/小型车/紧凑型车 a
中型车/中大型车/大型车 b
SUV车型/MPV车型 c
皮卡/微面/轻客 d
表中,SUV(Sport Utility Vehicle)是指运动型多用途汽车,MPV(Multi-Purpose Vehicles)是指多用途汽车。
在一些实施例中,若所述服务器确定所述受损车辆对应的车型为大型车,则所述服务器可以从所述数据库中得到与所述车型对应的维修价格表b。
具体地,所述服务器根据所述受损部位和修复类别在所述维修信息表中查询修复所述受损部位的维修信息。
示例性的,所述数据库中的大型车对应的维修价格表b,如表2所示。
表2为大型车对应的维修价格表b
表中,类别是指修复类别,可以包括补漆、更换和维修;部位是指受损部位,可以包括车门、后尾箱、减震器、制动盘、制动片和发动机等车辆部位。
在一些实施例中,若受损部位为“右后门受损”,所述修复类别为“需补漆”,所述服务器从所述维修价格表b中查询所述受损部位的维修价格,例如所述受损部位对应的维修价格为100元/次。若受损部位为“减震器损坏”,所述修复类别为“需更换”,所述服务器从所述维修价格表b中查询所述受损部位的维修价格,例如所述受损部位对应的维修价格为380元/个。
具体地,所述服务器根据所述受损部位的维修信息生成所述受损车辆的定损结果,所述定损结果包括维修价值。
示例性的,所述服务器根据所述受损部位的维修价格,计算所述受损部位对应的维修价值。
在一些实施例中,若所述受损部位为“右后门受损”和“右前门受损”,对应的修复类别都是“需补漆”,则维修价格为100+100=200元。若所述受损部位还包括“减震器损坏四个”,对应的修复类别为“需更换”,则维修价值为200+380×4=1720元。
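
A small sketch of the price lookup and repair-value arithmetic in this example; the prices (100 yuan per repaint, 380 yuan per shock absorber) are the illustrative figures quoted above, not real tariff data:

```python
# Illustrative prices taken from the worked example above (repair price table b for large vehicles)
price_table_b = {
    ("右后门", "补漆"): 100,   # yuan per repaint
    ("右前门", "补漆"): 100,
    ("减震器", "更换"): 380,   # yuan per part
}

def repair_value(damaged_items):
    """damaged_items: list of (part, repair category, quantity) tuples."""
    return sum(price_table_b[(part, category)] * quantity
               for part, category, quantity in damaged_items)

total = repair_value([("右后门", "补漆", 1), ("右前门", "补漆", 1), ("减震器", "更换", 4)])
# 100 + 100 + 380 * 4 = 1720 yuan, matching the worked example
```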
具体地,在计算所述受损部位的维修价值之后,根据所述车损信息和所述受损部位对应的维修价值,生成所述受损车辆对应的定损结果,并将所述定损结果向所述受损车辆对应的终端发送。
示例性的,所述定损结果可以包括:右后门受损和右前门受损,需补漆,更换减震器四个,维修价值1720元。
在一些实施例中,在将所述定损结果向所述受损车辆对应的终端发送之前,所述服务器还可以根据所述受损车辆对应的投保单号,获取所述受损车辆的保费数据,得到所述受损车辆的年度保费增额。
可以理解的,所述年度保费增额是指受损车辆进行车险定损后,下一年保费的增加额度。需要说明的,若用户选择车险定损,则定损金额等于维修价值。
所述服务器根据所述定损金额和所述年度保费增额进行比较。示例性的,若所述定损金额大于所述年度保费增额,则输出“建议定损”的推荐意见,建议用户进行车辆定损; 若所述定损金额小于所述年度保费增额,则输出“建议不定损”推荐意见,建议用户不进行车辆定损。
在本实施例中,所述服务器将所述定损结果发送到所述终端时,可以将所述推荐意见加入到所述定损结果中向所述终端发送,用户通过所述终端获取到所述定损结果与所述推荐意见。
具体地,如图8所示,步骤S30将所述定损结果发送到所述终端之后,还包括步骤S40至步骤S70。
步骤S40、若从所述终端获取定损确认信息,则获取所述终端的位置,所述定损确认信息是所述终端根据用户对所述定损结果的确认操作发送的。
示例性的,若所述终端接收到所述服务器发送的所述定损结果,所述终端在显示屏中显示所述定损结果。
在一些实施例中,若所述定损结果中的推荐意见为“建议定损”,则用户可以在所述终端上点击或选择“建议定损”这一选项,所述终端根据所述用户的定损确认操作向所述服务器发送一个定损确认信息。
具体地,若所述服务器获取到所述终端响应于用户对所述定损结果作出的定损确认操作发送的定损确认信息,则获取所述终端的位置信息。
示例性的,所述终端可以通过全球导航系统、北斗卫星导航系统、GLONASS定位系统或伽利略卫星导航系统确定所述终端的位置信息。
步骤S50、根据所述终端的位置确定位于所述受损车辆预设范围内的若干维修点,并获取各所述维修点的维修点信息。
具体地,所述服务器根据所述终端的位置信息搜索在所述受损车辆预设范围内的维修点。其中,所述预设范围可以是10Km。
在一些实施例中,所述服务器获取各所述维修点的维修点信息。其中,所述维修点信息可以包括各所述维修点的距离、维修价格和服务评分。
在本实施例中,所述服务器对所述预设范围内的维修点进行定位,获取若干维修点的名称与距离。该距离是指所述终端到维修点的距离。
具体地,所述服务器对若干所述维修点的距离进行距离划分。示例性的,若距离小于3Km,属于近距离;若距离为3Km~8Km,属于中距离;若距离为8Km~10Km,属于远距离。
在本实施例中,所述服务器将若干所述维修点的名称与距离等数据输入大数据模型,以获取若干所述维修点对应的维修信息和服务评分。
需要说明的是,大数据模型可以通过降维、回归、聚类、分类和关联等操作处理维修点的数据。通过一系列操作处理,大数据模型可以输出若干所述维修点的相关数据,例如所述维修点的维修价格数据和服务评分数据。
示例性的,所述服务器通过大数据模型可以获取若干所述维修点的维修价格和服务评分。其中,所述维修价格可以包括高、中、低三个级别;所述服务评分可以包括高、中、低三个级别。
示例性的,维修点A的维修点信息可以是近距离、维修价格为中级别、服务评价为高级别。
步骤S60、基于预设的维修点排序表,根据各所述维修点的维修点信息确定各所述维修点的推荐分值。
具体地,基于预设的维修点排序表,所述服务器根据各所述维修点的维修点信息计算各所述维修点的推荐分值,生成各所述维修点对应的维修推荐列表。
示例性的,所述维修点排序表包括距离、维修价格和服务评分三个类型以及各类型对应等级的分值,如表3所示:
表3为维修点排序表
表中,权重比为距离:维修价格:服务评分=3:4:3。
具体地,所述服务器根据加权算法,计算出各所述维修点的推荐分值,并根据各所述维修点以及各所述维修点对应的推荐分值生成推荐分值表。
在一些实施例中,若某个维修点的维修点信息为:距离为近级别,维修价格为中级别,服务评分为高级别,所述服务器根据加权算法计算得到该维修点的推荐分值:100×0.3+80×0.4+100×0.3=92。
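
A sketch of the weighted scoring in this example, using the 3:4:3 weights for distance, repair price and service rating; the level-to-score mapping is an assumption consistent with the worked example (100 × 0.3 + 80 × 0.4 + 100 × 0.3 = 92):

```python
# Assumed level-to-score mapping, consistent with the worked example (近/高 -> 100, 中 -> 80, 远/低 -> 60)
LEVEL_SCORES = {"近": 100, "中": 80, "远": 60, "高": 100, "低": 60}
WEIGHTS = (0.3, 0.4, 0.3)  # distance : repair price : service rating = 3 : 4 : 3

def recommend_score(distance_level, price_level, service_level):
    """Weighted recommendation score of one maintenance point."""
    d, p, s = LEVEL_SCORES[distance_level], LEVEL_SCORES[price_level], LEVEL_SCORES[service_level]
    return d * WEIGHTS[0] + p * WEIGHTS[1] + s * WEIGHTS[2]

print(recommend_score("近", "中", "高"))  # 100*0.3 + 80*0.4 + 100*0.3 = 92.0
```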
示例性的,所述服务器根据各所述维修点以及各所述维修点对应的推荐分值生成推荐分值表,如表4所示。
表4为推荐分值表
维修点 推荐分值(分)
维修点A 92
维修点B 88
维修点C 72
维修点D 60
步骤S70、根据所述推荐分值向所述终端推送至少一个维修点的维修点信息。
具体地,所述服务器根据所述推荐分值生成所述受损车辆对应的维修推荐列表,并将所述维修推荐列表中的维修点对应的维修点信息推送到所述终端。
在一些实施例中,所述服务器将推荐分值低于预设阈值的维修点从所述推荐分值表删除,得到所述受损车辆对应的维修推荐列表。
示例性的,所述预设分值可以是80分。
在本实施例中,若维修点C的分值为72分,维修点D的分值为60分,两者都低于所述预设分值80分,则将维修点C和D从所述推荐分值表中删除,得到所述推荐分值表对应 的维修推荐列表,如表5所示:
表5为维修推荐列表
维修点 推荐分值(分)
维修点A 92
维修点B 88
具体地,所述服务器可以将所述维修推荐列表中的至少一个维修点的维修点信息向所述终端推送。
在一些实施例中,所述服务器可以将所述维修点A的距离、维修价格以及服务评分等维修点信息推送给所述终端。所述服务器也可以将所述维修点A与所述维修点B的距离、维修价格以及服务评分等维修点信息一起推送给所述终端。
通过获取维修点的距离、维修价格和服务评分,并根据预设的维修点排序表得到维修推荐列表;然后根据维修推荐列表向用户推荐维修点,让用户选择合适的维修点,推荐准确率高,有助于提高用户体验度。
上述实施例提供的车辆定损方法,通过在获取终端上传的车损图像时判断车损图像是否符合定损条件,可以确保车损图像的质量,提高定损结果的准确率;对车损图像进行预处理,可以提高车损预测结果;通过训练好的车损预测模型对待预测图像进行车损预测,可以准确得到受损车辆对应的车损信息,预测效率较高,节省定损时间;之后根据维修信息表可以得到车损信息中的受损部位与修复类别对应的维修信息,进而得到定损结果;通过获取各维修点的距离、维修价格和服务评分,并根据预设的维修点排序表得到维修推荐列表,向用户推荐合适的维修点,又好又快地解决车辆的定损和维修两个难题,节省时间和提高了用户的体验度。
请参阅图9,图9是本申请的实施例还提供一种车辆定损装置200的示意性框图,该车辆定损装置用于执行前述的车辆定损方法。其中,该车辆定损装置可以配置于服务器或终端中。
如图9所示,该车辆定损装置200,包括:图像获取模块201、车损预测模块202、定损生成模块203。
图像获取模块201,用于获取终端上传的车损图像,对所述车损图像进行预处理得到待预测图像,所述车损图像包括所述终端拍摄的受损车辆的受损部位。
在一些实施例中,如图10所示,该图像获取模块201,包括:图像获取子模块2011、判断子模块2012、归一化子模块2013、亮度子模块2014和对比度子模块2015。
图像获取子模块2011,用于从所述终端获取所述终端拍摄的图像以及所述图像的拍摄参数。
判断子模块2012,用于根据所述拍摄参数和所述图像判断所述图像是否符合定损条件,若所述图像符合所述定损条件,将所述图像确定为所述车损图像;若所述图像不符合定损条件,根据所述图像确定拍摄提示,并将所述拍摄提示发送给所述终端。
归一化子模块2013,用于对所述车损图像进行归一化处理,得到归一化后的图像。
亮度子模块2014,用于对所述归一化后的图像进行亮度均衡处理,得到亮度均衡后的图像。
对比度子模块2015,用于对所述亮度均衡后的图像进行对比度增强处理,得到待预测图像。
车损预测模块202,用于基于车损预测模型,根据所述待预测图像确定所述受损车辆对应的车损信息,所述车损信息包括受损部位和修复类别。
在一些实施例中,如图10所示,该车损预测模块202,包括:卷积子模块2021、车损预测子模块2022、预测框确定子模块2023、重叠度计算子模块2024和车损确定子模块2025。
卷积子模块2021,用于将所述待预测图像输入训练好的车损预测模型进行卷积处理,得到所述待预测图像对应的特征图。
车损预测子模块2022,用于根据预设的多个预测框对所述特征图进行车损预测,得到各所述预测框的类别与置信度,所述类别包括受损部位和修复类别。
预测框确定子模块2023,用于从置信度大于置信度阈值的预测框中确定预设数目的待选预测框。
重叠度计算子模块2024,用于计算不同待选预测框之间的重叠度,将重叠度大于重叠度阈值的待选预测框过滤掉,得到目标预测框。
车损确定子模块2025,用于根据所述目标预测框的类别确定所述受损车辆对应的车损信息。
定损生成模块203,用于获取所述受损部位和修复类别对应的维修信息,根据所述维修信息确定所述受损车辆的定损结果,并将所述定损结果发送到所述终端。
在一些实施例中,如图10所示,该定损生成模块203,包括:车型获取子模块2031、维修查询子模块2032和定损计算子模块2033。
车型获取子模块2031,用于获取所述受损车辆对应的车型,从数据库中获取与所述车型对应的维修信息表。
维修查询子模块2032,用于根据所述受损部位和修复类别在所述维修信息表中查询修复所述受损部位的维修信息。
定损计算子模块2033,用于根据所述受损部位的维修信息生成所述受损车辆的定损结果。
在一些实施例中,如图9所示,该车辆定损装置200,还包括:位置获取模块204、信息获取模块205、分值生成模块206和推送模块207。
位置获取模块204,用于若从所述终端获取定损确认信息,则获取所述终端的位置,所述定损确认信息是所述终端根据用户对所述定损结果的确认操作发送的。
信息获取模块205,用于根据所述终端的位置确定位于所述受损车辆预设范围内的若干维修点,并获取各所述维修点的维修点信息。
分值生成模块206,用于基于预设的维修点排序表,根据各所述维修点的维修点信息确定各所述维修点的推荐分值。
推送模块207,用于根据所述推荐分值向所述终端推送至少一个维修点的维修点信息。
在一些实施例中,如图9所示,该车辆定损装置200,还包括:模型确定模块208、样本图像获取模块209、车损训练模块210、损失值计算模块211和参数调整模块212。
模型确定模块208,用于确定初始车损预测模型。
样本图像获取模块209,用于获取车损样本图像和所述车损样本图像的标注信息,对所述车损样本图像进行预处理得到训练样本图像,所述车损样本图像包括受损车辆的受损部位,所述标注信息包括受损标注部位和修复标注类别。
车损训练模块210,用于将所述训练样本图像输入所述初始车损预测模型,得到所述训练样本图像对应的车损信息,所述车损信息包括受损部位和修复类别。
损失值计算模块211,用于根据所述受损部位和所述受损标注部位,以及所述修复类别和所述修复标注类别计算预测损失值。
参数调整模块212,用于根据所述预测损失值调整所述初始车损预测模型中的参数,以得到训练好的车损预测模型。
上述的装置可以实现为一种计算机程序的形式,该计算机程序可以在如图11所示的计算机设备上运行。
请参阅图11,图11是本申请实施例提供的一种计算机设备的结构示意性框图。该计算机设备可以是服务器。
请参阅图11,该计算机设备包括通过系统总线连接的处理器和存储器,其中,存储器可以包括非易失性存储介质和内存储器。
处理器用于提供计算和控制能力,支撑整个计算机设备的运行。
内存储器为非易失性存储介质中的计算机程序的运行提供环境,该计算机程序被处理器执行时,可使得处理器执行任意一种车辆定损方法。
应当理解的是,处理器可以是中央处理单元(Central Processing Unit,CPU),该处理器还可以是其他通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。其中,通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。
其中,在一个实施例中,所述处理器用于运行存储在存储器中的计算机程序,以实现如下步骤:
获取终端上传的车损图像,对所述车损图像进行预处理得到待预测图像,所述车损图像包括所述终端拍摄的受损车辆的受损部位;基于车损预测模型,根据所述待预测图像确定所述受损车辆对应的车损信息,所述车损信息包括受损部位和修复类别;获取与所述受损部位和修复类别对应的维修信息,根据所述维修信息确定所述受损车辆的定损结果,并将所述定损结果发送到所述终端。
在一个实施例中,所述处理器在实现获取终端上传的车损图像时,用于实现:
从所述终端获取所述终端拍摄的图像以及所述图像的拍摄参数;根据所述拍摄参数和所述图像判断所述图像是否符合定损条件;若所述图像符合所述定损条件,将所述图像确定为所述车损图像;若所述图像不符合定损条件,根据所述图像确定拍摄提示,并将所述拍摄提示发送给所述终端。
在一个实施例中,所述处理器在实现对所述车损图像进行预处理得到待预测图像时,用于实现:
对所述车损图像进行归一化处理,得到归一化后的图像;对所述归一化后的图像进行亮度均衡处理,得到亮度均衡后的图像;对所述亮度均衡后的图像进行对比度增强处理,得到待预测图像。
在一个实施例中,所述处理器在实现基于车损预测模型,根据所述待预测图像确定所述受损车辆对应的车损信息时,用于实现:
将所述待预测图像输入训练好的车损预测模型进行卷积处理,得到所述待预测图像对应的特征图;根据预设的多个预测框对所述特征图进行车损预测,得到各所述预测框的类别与置信度,所述类别包括受损部位和修复类别;从置信度大于置信度阈值的预测框中确定预设数目的待选预测框;计算不同待选预测框之间的重叠度,将重叠度大于重叠度阈值的待选预测框过滤掉,得到目标预测框;根据所述目标预测框的类别确定所述受损车辆对应的车损信息。
在一个实施例中,所述处理器在实现获取所述受损部位和修复类别对应的维修信息,根据所述维修信息确定所述受损车辆的定损结果时,用于实现:
获取所述受损车辆对应的车型,从数据库中获取与所述车型对应的维修信息表;根据所述受损部位和修复类别在所述维修信息表中查询修复所述受损部位的维修信息;根据所述受损部位的维修信息生成所述受损车辆的定损结果。
在一个实施例中,所述处理器在实现将所述定损结果发送到所述终端之后之后,还用于实现:
若从所述终端获取定损确认信息,则获取所述终端的位置,所述定损确认信息是所述终端根据用户对所述定损结果的确认操作发送的;根据所述终端的位置确定位于所述受损车辆预设范围内的若干维修点,并获取各所述维修点的维修点信息;基于预设的维修点排序表,根据各所述维修点的维修点信息确定各所述维修点的推荐分值;根据所述推荐分值向所述终端推送至少一个维修点的维修点信息。
在一个实施例中,所述处理器在实现基于车损预测模型,根据所述待预测图像确定所述受损车辆对应的车损信息之前,还用于实现:
确定初始车损预测模型;获取车损样本图像和所述车损样本图像的标注信息,对所述车损样本图像进行预处理得到训练样本图像,所述车损样本图像包括受损车辆的受损部位,所述标注信息包括受损标注部位和修复标注类别;将所述训练样本图像输入所述初始车损预测模型,得到所述训练样本图像对应的车损信息,所述车损信息包括受损部位和修复类别;根据所述受损部位和所述受损标注部位,以及所述修复类别和所述修复标注类别计算 预测损失值;根据所述预测损失值调整所述初始车损预测模型中的参数,以得到训练好的车损预测模型。
本申请的实施例中还提供一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,所述计算机程序中包括程序指令,所述处理器执行所述程序指令,实现本申请实施例提供的任一项车辆定损方法。其中,所述计算机可读存储介质可以是非易失性,也可以是易失性。
其中,所述计算机可读存储介质可以是前述实施例所述的计算机设备的内部存储单元,例如所述计算机设备的硬盘或内存。所述计算机可读存储介质也可以是所述计算机设备的外部存储设备,例如所述计算机设备上配备的插接式硬盘,智能存储卡(Smart Media Card,SMC),安全数字卡(Secure Digital Card,SD Card),闪存卡(Flash Card)等。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (20)

  1. 一种车辆定损方法,其中,包括:
    获取终端上传的车损图像,对所述车损图像进行预处理得到待预测图像,所述车损图像包括所述终端拍摄的受损车辆的受损部位;
    基于车损预测模型,根据所述待预测图像确定所述受损车辆对应的车损信息,所述车损信息包括受损部位和修复类别;
    获取与所述受损部位和修复类别对应的维修信息,根据所述维修信息确定所述受损车辆的定损结果,并将所述定损结果发送到所述终端。
  2. 根据权利要求1所述的车辆定损方法,其中,所述获取终端上传的车损图像,包括:
    从所述终端获取所述终端拍摄的图像以及所述图像的拍摄参数;
    根据所述拍摄参数和所述图像判断所述图像是否符合定损条件;
    若所述图像符合所述定损条件,将所述图像确定为所述车损图像;
    若所述图像不符合定损条件,根据所述图像确定拍摄提示,并将所述拍摄提示发送给所述终端。
  3. 根据权利要求1所述的车辆定损方法,其中,所述对所述车损图像进行预处理得到待预测图像,包括:
    对所述车损图像进行归一化处理,得到归一化后的图像;
    对所述归一化后的图像进行亮度均衡处理,得到亮度均衡后的图像;
    对所述亮度均衡后的图像进行对比度增强处理,得到待预测图像。
  4. 根据权利要求1-3中任一项所述的车辆定损方法,其中,所述基于车损预测模型,根据所述待预测图像确定所述受损车辆对应的车损信息,包括:
    将所述待预测图像输入训练好的车损预测模型进行卷积处理,得到所述待预测图像对应的特征图;
    根据预设的多个预测框对所述特征图进行车损预测,得到各所述预测框的类别与置信度,所述类别包括受损部位和修复类别;
    从置信度大于置信度阈值的预测框中确定预设数目的待选预测框;
    计算不同待选预测框之间的重叠度,将重叠度大于重叠度阈值的待选预测框过滤掉,得到目标预测框;
    根据所述目标预测框的类别确定所述受损车辆对应的车损信息。
  5. 根据权利要求1中所述的车辆定损方法,其中,所述获取所述受损部位和修复类别对应的维修信息,根据所述维修信息确定所述受损车辆的定损结果,包括:
    获取所述受损车辆对应的车型,从数据库中获取与所述车型对应的维修信息表;
    根据所述受损部位和修复类别在所述维修信息表中查询修复所述受损部位的维修信息;
    根据所述受损部位的维修信息生成所述受损车辆的定损结果,所述定损结果包括维修价值。
  6. 根据权利要求1中所述的车辆定损方法,其中,所述将所述定损结果发送到所述终 端之后,还包括:
    若从所述终端获取定损确认信息,则获取所述终端的位置,所述定损确认信息是所述终端根据用户对所述定损结果的确认操作发送的;
    根据所述终端的位置确定位于所述受损车辆预设范围内的若干维修点,并获取各所述维修点的维修点信息;
    基于预设的维修点排序表,根据各所述维修点的维修点信息确定各所述维修点的推荐分值;
    根据所述推荐分值向所述终端推送至少一个维修点的维修点信息。
  7. 根据权利要求1中所述的车辆定损方法,其中,所述基于车损预测模型,根据所述待预测图像确定所述受损车辆对应的车损信息之前,还包括:
    确定初始车损预测模型;
    获取车损样本图像和所述车损样本图像的标注信息,对所述车损样本图像进行预处理得到训练样本图像,所述车损样本图像包括受损车辆的受损部位,所述标注信息包括受损标注部位和修复标注类别;
    将所述训练样本图像输入所述初始车损预测模型,得到所述训练样本图像对应的车损信息,所述车损信息包括受损部位和修复类别;
    根据所述受损部位和所述受损标注部位,以及所述修复类别和所述修复标注类别计算预测损失值;
    根据所述预测损失值调整所述初始车损预测模型中的参数,以得到训练好的车损预测模型。
  8. 一种车辆定损装置,其中,包括:
    图像获取模块,用于获取终端上传的车损图像,对所述车损图像进行预处理得到待预测图像,所述车损图像包括所述终端拍摄的受损车辆的受损部位;
    车损预测模块,用于基于车损预测模型,根据所述待预测图像确定所述受损车辆对应的车损信息,所述车损信息包括受损部位和修复类别;
    定损生成模块,用于获取所述受损部位和修复类别对应的维修信息,根据所述维修信息确定所述受损车辆的定损结果,并将所述定损结果发送到所述终端。
  9. 一种计算机设备,其中,所述计算机设备包括存储器和处理器;
    所述存储器,用于存储计算机程序;
    所述处理器,用于执行所述计算机程序并在执行所述计算机程序时实现以下步骤:
    获取终端上传的车损图像,对所述车损图像进行预处理得到待预测图像,所述车损图像包括所述终端拍摄的受损车辆的受损部位;
    基于车损预测模型,根据所述待预测图像确定所述受损车辆对应的车损信息,所述车损信息包括受损部位和修复类别;
    获取与所述受损部位和修复类别对应的维修信息,根据所述维修信息确定所述受损车辆的定损结果,并将所述定损结果发送到所述终端。
  10. 根据权利要求9所述的计算机设备,其中,所述处理器用于:
    从所述终端获取所述终端拍摄的图像以及所述图像的拍摄参数;
    根据所述拍摄参数和所述图像判断所述图像是否符合定损条件;
    若所述图像符合所述定损条件,将所述图像确定为所述车损图像;
    若所述图像不符合定损条件,根据所述图像确定拍摄提示,并将所述拍摄提示发送给所述终端。
  11. 根据权利要求9所述的计算机设备,其中,所述处理器用于:
    对所述车损图像进行归一化处理,得到归一化后的图像;
    对所述归一化后的图像进行亮度均衡处理,得到亮度均衡后的图像;
    对所述亮度均衡后的图像进行对比度增强处理,得到待预测图像。
  12. 根据权利要求9-11中任一项所述的计算机设备,其中,所述处理器用于:
    将所述待预测图像输入训练好的车损预测模型进行卷积处理,得到所述待预测图像对应的特征图;
    根据预设的多个预测框对所述特征图进行车损预测,得到各所述预测框的类别与置信度,所述类别包括受损部位和修复类别;
    从置信度大于置信度阈值的预测框中确定预设数目的待选预测框;
    计算不同待选预测框之间的重叠度,将重叠度大于重叠度阈值的待选预测框过滤掉,得到目标预测框;
    根据所述目标预测框的类别确定所述受损车辆对应的车损信息。
  13. 根据权利要求9中所述的计算机设备,其中,所述处理器用于:
    获取所述受损车辆对应的车型,从数据库中获取与所述车型对应的维修信息表;
    根据所述受损部位和修复类别在所述维修信息表中查询修复所述受损部位的维修信息;
    根据所述受损部位的维修信息生成所述受损车辆的定损结果,所述定损结果包括维修价值。
  14. 根据权利要求9中所述的计算机设备,其中,所述处理器用于:
    若从所述终端获取定损确认信息,则获取所述终端的位置,所述定损确认信息是所述终端根据用户对所述定损结果的确认操作发送的;
    根据所述终端的位置确定位于所述受损车辆预设范围内的若干维修点,并获取各所述维修点的维修点信息;
    基于预设的维修点排序表,根据各所述维修点的维修点信息确定各所述维修点的推荐分值;
    根据所述推荐分值向所述终端推送至少一个维修点的维修点信息。
  15. 根据权利要求9中所述的计算机设备,其中,所述处理器用于:
    确定初始车损预测模型;
    获取车损样本图像和所述车损样本图像的标注信息,对所述车损样本图像进行预处理得到训练样本图像,所述车损样本图像包括受损车辆的受损部位,所述标注信息包括受损 标注部位和修复标注类别;
    将所述训练样本图像输入所述初始车损预测模型,得到所述训练样本图像对应的车损信息,所述车损信息包括受损部位和修复类别;
    根据所述受损部位和所述受损标注部位,以及所述修复类别和所述修复标注类别计算预测损失值;
    根据所述预测损失值调整所述初始车损预测模型中的参数,以得到训练好的车损预测模型。
  16. 一种计算机可读存储介质,其中,所述计算机可读存储介质存储有计算机程序,所述计算机程序包括程序指令,所述程序指令被处理器执行时,用于实现以下步骤:
    获取终端上传的车损图像,对所述车损图像进行预处理得到待预测图像,所述车损图像包括所述终端拍摄的受损车辆的受损部位;
    基于车损预测模型,根据所述待预测图像确定所述受损车辆对应的车损信息,所述车损信息包括受损部位和修复类别;
    获取与所述受损部位和修复类别对应的维修信息,根据所述维修信息确定所述受损车辆的定损结果,并将所述定损结果发送到所述终端。
  17. 根据权利要求16所述的计算机可读存储介质,其中,所述程序指令被处理器执行时,还用于实现以下步骤:
    从所述终端获取所述终端拍摄的图像以及所述图像的拍摄参数;
    根据所述拍摄参数和所述图像判断所述图像是否符合定损条件;
    若所述图像符合所述定损条件,将所述图像确定为所述车损图像;
    若所述图像不符合定损条件,根据所述图像确定拍摄提示,并将所述拍摄提示发送给所述终端。
  18. 根据权利要求16所述的计算机可读存储介质,其中,所述程序指令被处理器执行时,还用于实现以下步骤:
    对所述车损图像进行归一化处理,得到归一化后的图像;
    对所述归一化后的图像进行亮度均衡处理,得到亮度均衡后的图像;
    对所述亮度均衡后的图像进行对比度增强处理,得到待预测图像。
  19. 根据权利要求16-18中任一项所述的计算机可读存储介质,其中,所述程序指令被处理器执行时,还用于实现以下步骤:
    将所述待预测图像输入训练好的车损预测模型进行卷积处理,得到所述待预测图像对应的特征图;
    根据预设的多个预测框对所述特征图进行车损预测,得到各所述预测框的类别与置信度,所述类别包括受损部位和修复类别;
    从置信度大于置信度阈值的预测框中确定预设数目的待选预测框;
    计算不同待选预测框之间的重叠度,将重叠度大于重叠度阈值的待选预测框过滤掉,得到目标预测框;
    根据所述目标预测框的类别确定所述受损车辆对应的车损信息。
  20. 根据权利要求15中所述的计算机可读存储介质,其中,所述程序指令被处理器执行时,还用于实现以下步骤:
    获取所述受损车辆对应的车型,从数据库中获取与所述车型对应的维修信息表;
    根据所述受损部位和修复类别在所述维修信息表中查询修复所述受损部位的维修信息;
    根据所述受损部位的维修信息生成所述受损车辆的定损结果,所述定损结果包括维修价值。
PCT/CN2020/099268 2020-01-13 2020-06-30 车辆定损方法、装置、计算机设备和存储介质 WO2021143063A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010032163.3A CN111311540A (zh) 2020-01-13 2020-01-13 车辆定损方法、装置、计算机设备和存储介质
CN202010032163.3 2020-01-13

Publications (1)

Publication Number Publication Date
WO2021143063A1 true WO2021143063A1 (zh) 2021-07-22

Family

ID=71159798

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/099268 WO2021143063A1 (zh) 2020-01-13 2020-06-30 车辆定损方法、装置、计算机设备和存储介质

Country Status (2)

Country Link
CN (1) CN111311540A (zh)
WO (1) WO2021143063A1 (zh)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111311540A (zh) * 2020-01-13 2020-06-19 平安科技(深圳)有限公司 车辆定损方法、装置、计算机设备和存储介质
CN111488875B (zh) * 2020-06-24 2021-04-13 爱保科技有限公司 基于图像识别的车险理赔核损方法、装置和电子设备
CN111881856B (zh) * 2020-07-31 2023-01-31 蚂蚁胜信(上海)信息技术有限公司 基于图像的车辆定损方法和装置
CN112085721A (zh) * 2020-09-07 2020-12-15 中国平安财产保险股份有限公司 基于人工智能的水淹车定损方法、装置、设备及存储介质
CN111931746B (zh) * 2020-10-09 2021-02-12 深圳壹账通智能科技有限公司 一种车损判定方法、装置、计算机设备及可读存储介质
CN112348799B (zh) * 2020-11-11 2021-07-13 德联易控科技(北京)有限公司 车辆定损方法、装置、终端设备及存储介质
CN112270678B (zh) * 2020-11-18 2024-09-03 德联易控科技(北京)有限公司 图像处理方法以及处理系统
CN112712498B (zh) * 2020-12-25 2024-09-13 北京百度网讯科技有限公司 移动终端执行的车辆定损方法、装置、移动终端、介质
CN117456473B (zh) * 2023-12-25 2024-03-29 杭州吉利汽车数字科技有限公司 车辆装配检测方法、装置、设备及存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106780048A (zh) * 2016-11-28 2017-05-31 中国平安财产保险股份有限公司 一种智能车险的自助理赔方法、自助理赔装置及系统
CN109325531A (zh) * 2018-09-17 2019-02-12 平安科技(深圳)有限公司 基于图像的车辆定损方法、装置、设备及存储介质
CN109948811A (zh) * 2019-01-31 2019-06-28 德联易控科技(北京)有限公司 车辆定损的处理方法、装置及电子设备
CN110287768A (zh) * 2019-05-06 2019-09-27 浙江君嘉智享网络科技有限公司 图像智能识别车辆定损方法
CN110674788A (zh) * 2019-10-09 2020-01-10 北京百度网讯科技有限公司 车辆定损方法和装置
CN111311540A (zh) * 2020-01-13 2020-06-19 平安科技(深圳)有限公司 车辆定损方法、装置、计算机设备和存储介质

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108764046A (zh) * 2018-04-26 2018-11-06 平安科技(深圳)有限公司 车辆损伤分类模型的生成装置、方法及计算机可读存储介质
CN108734702A (zh) * 2018-04-26 2018-11-02 平安科技(深圳)有限公司 车损判定方法、服务器及存储介质
CN109711474B (zh) * 2018-12-24 2023-01-17 中山大学 一种基于深度学习的铝材表面缺陷检测算法
CN110060233B (zh) * 2019-03-20 2022-03-18 中国农业机械化科学研究院 一种玉米果穗破损检测方法
CN110245689A (zh) * 2019-05-23 2019-09-17 杭州有容智控科技有限公司 基于机器视觉的盾构刀具识别与定位检测方法
CN110363238A (zh) * 2019-07-03 2019-10-22 中科软科技股份有限公司 智能车辆定损方法、系统、电子设备及存储介质


Also Published As

Publication number Publication date
CN111311540A (zh) 2020-06-19


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20913292

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20913292

Country of ref document: EP

Kind code of ref document: A1