WO2021135500A1 - Vehicle damage detection model training, vehicle damage detection method, apparatus, device and medium


Info

Publication number
WO2021135500A1
Authority
WO
WIPO (PCT)
Prior art keywords
car damage
image
car
area
damage
Prior art date
Application number
PCT/CN2020/120758
Other languages
English (en)
French (fr)
Inventor
康甲
刘莉红
刘玉宇
Original Assignee
平安科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2021135500A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 40/00 Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q 40/08 Insurance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/56 Extraction of image or video features relating to colour
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Definitions

  • This application relates to the field of artificial intelligence classification models, and in particular to vehicle damage detection model training and vehicle damage detection methods, devices, computer equipment, and storage media.
  • In the prior art, after a traffic accident, insurance companies generally rely on manual inspection of the images of the vehicle damage taken by the owner or by business personnel, that is, the damage type and damaged area of each damaged part of the vehicle in the image are identified and determined by hand.
  • Because such assessment relies on human judgment, the manually recognized damage type and damaged area may not match the actual damage; for example, because dents and scratches are difficult to distinguish in an image, damage assessment personnel can easily classify dent damage as scratch damage.
  • Misjudgments caused by the above conditions greatly reduce the accuracy of damage assessment; they may cause cost losses for the insurance company and also reduce the satisfaction of car owners or customers. In addition, manual damage assessment involves a huge workload and low efficiency, and when a given assessment accuracy must be met, the workload increases further and work efficiency drops.
  • This application provides vehicle damage detection model training and vehicle damage detection methods, devices, computer equipment, and storage media. By introducing car damage conversion images, building on the InceptionV4 model architecture, and training with the GIOU method, the soft-NMS algorithm, and the GIOU loss algorithm, it reduces the number of samples that must be collected, improves the accuracy and reliability of recognition, reduces costs, and improves training efficiency.
  • a vehicle damage detection model training method including:
  • the car damage sample set includes a car damage sample image
  • the car damage sample image includes a car damage original image and a car damage conversion image
  • one car damage sample image is associated with a car damage label group
  • the car damage label group includes a car damage label type and a rectangular area
  • the car damage conversion image is obtained from the car damage original image by random value accumulation and conversion through an image preprocessing model
  • the car damage sample image is input into a car damage detection model containing initial parameters, the car damage texture features in the car damage sample image are extracted through the car damage detection model, and at least one prediction result output by the car damage detection model according to the extracted car damage texture features is obtained;
  • the car damage detection model is a deep convolutional neural network model based on the InceptionV4 model architecture;
  • the recognition result is obtained by screening all the prediction results with the car damage detection model through the GIOU method and the soft-NMS algorithm;
  • the recognition result includes the sample car damage type and the sample recognition area;
  • the GIOU loss algorithm is used to determine a first loss value according to the rectangular area and the sample recognition area, and a second loss value is determined according to the car damage label type and the sample car damage type through the multi-class cross-entropy method;
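As a concrete illustration of the two loss terms, the following plain-Python sketch computes a GIOU box loss and a multi-class cross-entropy loss and combines them into a total loss. The patent states that a total loss value is determined from the first and second loss values but does not give the combination, so the weighted sum and the weight names `w_box` and `w_cls` are assumptions for illustration.

```python
import math

def giou_loss(giou_value):
    # First loss value: GIOU loss for one predicted box, 1 - GIOU (range 0..2)
    return 1.0 - giou_value

def cross_entropy(probs, label_index):
    # Second loss value: multi-class cross entropy, -log p(true class)
    return -math.log(probs[label_index])

def total_loss(giou_value, probs, label_index, w_box=1.0, w_cls=1.0):
    # Total loss as a weighted sum of the box and classification losses;
    # the weights are illustrative, not specified by the patent.
    return w_box * giou_loss(giou_value) + w_cls * cross_entropy(probs, label_index)
```

A perfect prediction (GIOU of 1 and probability 1 on the labelled class) yields a total loss of 0, which is the convergence direction the iterative update drives toward.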
  • a vehicle damage detection method including:
  • the car damage image is input into the car damage detection model trained by the above vehicle damage detection model training method, the car damage texture features are extracted through the car damage detection model, and the final result output by the car damage detection model according to the car damage texture features is obtained; the final result includes a car damage type and a car damage area, and characterizes the car damage type and the car damage area of all damage locations in the car damage image.
  • a vehicle damage detection model training device including:
  • the acquisition module is used to acquire a car damage sample set;
  • the car damage sample set includes car damage sample images, the car damage sample images include car damage original images and car damage conversion images, and one car damage sample image is associated with one car damage label group;
  • the car damage label group includes a car damage label type and a rectangular area;
  • the car damage conversion image is obtained from the car damage original image by random value accumulation and conversion through the image preprocessing model;
  • the input module is used to input the car damage sample image into a car damage detection model containing initial parameters, extract the car damage texture features in the car damage sample image through the car damage detection model, and obtain at least one prediction result output by the car damage detection model according to the extracted car damage texture features; the car damage detection model is a deep convolutional neural network model based on the InceptionV4 model architecture;
  • the recognition module is used to obtain the recognition result obtained by screening all the prediction results by the car damage detection model through the GIOU method and the soft-NMS algorithm; the recognition result includes the sample car damage type and the sample recognition area;
  • the determining module is used to determine the first loss value according to the GIOU loss algorithm according to the rectangular area and the sample recognition area, and at the same time according to the car damage label type and the sample car damage type through the multi-class cross entropy method Determine the second loss value;
  • a loss module configured to determine a total loss value according to the first loss value and the second loss value
  • the iterative module is used to iteratively update the initial parameters of the car damage detection model when the total loss value does not reach the preset convergence condition, until the total loss value reaches the preset convergence condition; the converged vehicle damage detection model is then recorded as the trained vehicle damage detection model.
  • a vehicle damage detection device including:
  • the receiving module is used to receive the car damage detection instruction and obtain the car damage image
  • the detection module is used to input the car damage image into the car damage detection model trained by the above vehicle damage detection model training method, extract the car damage texture features through the car damage detection model, and obtain the final result output by the car damage detection model according to the car damage texture features; the final result includes the car damage type and the car damage area, and characterizes the car damage type and the car damage area of all damage locations in the car damage image.
  • a computer device includes a memory, a processor, and computer-readable instructions that are stored in the memory and can run on the processor, and the processor implements the following steps when the processor executes the computer-readable instructions:
  • the car damage sample set includes a car damage sample image
  • the car damage sample image includes a car damage original image and a car damage conversion image
  • one car damage sample image is associated with a car damage label group
  • the car damage label group includes a car damage label type and a rectangular area
  • the car damage conversion image is obtained from the car damage original image by random value accumulation and conversion through an image preprocessing model
  • the car damage sample image is input into a car damage detection model containing initial parameters, the car damage texture features in the car damage sample image are extracted through the car damage detection model, and at least one prediction result output by the car damage detection model according to the extracted car damage texture features is obtained;
  • the car damage detection model is a deep convolutional neural network model based on the InceptionV4 model architecture;
  • the recognition result is obtained by screening all the prediction results with the car damage detection model through the GIOU method and the soft-NMS algorithm;
  • the recognition result includes the sample car damage type and the sample recognition area;
  • the GIOU loss algorithm is used to determine a first loss value according to the rectangular area and the sample recognition area, and a second loss value is determined according to the car damage label type and the sample car damage type through the multi-class cross-entropy method;
  • a computer device includes a memory, a processor, and computer-readable instructions that are stored in the memory and can run on the processor, and the processor further implements the following steps when the processor executes the computer-readable instructions:
  • the car damage image is input into the car damage detection model trained by the vehicle damage detection model training method, the car damage texture features are extracted through the car damage detection model, and the final result output by the car damage detection model according to the car damage texture features is obtained; the final result includes a car damage type and a car damage area, and characterizes the car damage type and the car damage area of all damage locations in the car damage image.
  • One or more readable storage media storing computer readable instructions, when the computer readable instructions are executed by one or more processors, the one or more processors execute the following steps:
  • the car damage sample set includes a car damage sample image
  • the car damage sample image includes a car damage original image and a car damage conversion image
  • one car damage sample image is associated with a car damage label group
  • the car damage label group includes a car damage label type and a rectangular area
  • the car damage conversion image is obtained from the car damage original image by random value accumulation and conversion through an image preprocessing model
  • the car damage sample image is input into a car damage detection model containing initial parameters, the car damage texture features in the car damage sample image are extracted through the car damage detection model, and at least one prediction result output by the car damage detection model according to the extracted car damage texture features is obtained;
  • the car damage detection model is a deep convolutional neural network model based on the InceptionV4 model architecture;
  • the recognition result is obtained by screening all the prediction results with the car damage detection model through the GIOU method and the soft-NMS algorithm;
  • the recognition result includes the sample car damage type and the sample recognition area;
  • the GIOU loss algorithm is used to determine a first loss value according to the rectangular area and the sample recognition area, and a second loss value is determined according to the car damage label type and the sample car damage type through the multi-class cross-entropy method;
  • One or more readable storage media storing computer readable instructions, when the computer readable instructions are executed by one or more processors, the one or more processors further execute the following steps:
  • the car damage image is input into the car damage detection model trained by the vehicle damage detection model training method, the car damage texture features are extracted through the car damage detection model, and the final result output by the car damage detection model according to the car damage texture features is obtained; the final result includes a car damage type and a car damage area, and characterizes the car damage type and the car damage area of all damage locations in the car damage image.
  • The vehicle damage detection model training method, device, computer equipment, and storage medium provided in this application train the car damage detection model with a car damage sample set containing car damage sample images. The car damage sample images include car damage original images and car damage conversion images, the car damage conversion image being obtained from the car damage original image by accumulating random values through the image preprocessing model and converting. The car damage texture features of the car damage sample image are extracted by the car damage detection model based on the InceptionV4 model architecture to obtain at least one prediction result; the GIOU method and the soft-NMS algorithm are used to obtain the recognition result; the GIOU loss algorithm determines the first loss value according to the rectangular area and the sample recognition area, and the multi-class cross-entropy method determines the second loss value according to the car damage label type and the sample car damage type; the total loss value is determined according to the first loss value and the second loss value; when the total loss value reaches the preset convergence condition, the converged vehicle damage detection model is recorded as the trained vehicle damage detection model.
  • Thus, this application provides a vehicle damage detection model training method that, by introducing the car damage conversion images, building on the InceptionV4 model architecture, and training through the GIOU method, the soft-NMS algorithm, and the GIOU loss algorithm, reduces the number of samples that must be collected, improves the accuracy and reliability of recognition, and accurately and rapidly identifies the car damage type and car damage area in images containing damage locations, thereby improving recognition accuracy, reducing cost, and improving training efficiency.
  • The vehicle damage detection method, device, computer equipment, and storage medium provided in this application acquire a car damage image, input the car damage image into the above trained car damage detection model, extract the car damage texture features through the car damage detection model, and obtain the final result, including the car damage type and the car damage area, output by the car damage detection model according to the car damage texture features; the final result characterizes the car damage type and the car damage area of all damage locations in the car damage image. This application improves recognition speed, and thereby the accuracy and reliability of determining the damage type and damaged area, improves damage assessment efficiency, reduces costs, and improves customer satisfaction.
  • FIG. 1 is a schematic diagram of an application environment of a vehicle damage detection model training method or a vehicle damage detection method in an embodiment of the present application;
  • FIG. 2 is a flowchart of a method for training a car damage detection model in an embodiment of the present application
  • FIG. 3 is a flowchart of a vehicle damage detection model training method in another embodiment of the present application.
  • FIG. 4 is a flowchart of step S10 of the vehicle damage detection model training method in an embodiment of the present application;
  • FIG. 5 is a flowchart of step S30 of the method for training a car damage detection model in an embodiment of the present application
  • FIG. 6 is a flowchart of step S40 of the vehicle damage detection model training method in an embodiment of the present application.
  • FIG. 7 is a flowchart of a vehicle damage detection method in an embodiment of the present application.
  • FIG. 8 is a functional block diagram of a vehicle damage detection model training device in an embodiment of the present application.
  • Fig. 9 is a schematic block diagram of a vehicle damage detection device in an embodiment of the present application.
  • Fig. 10 is a schematic diagram of a computer device in an embodiment of the present application.
  • the vehicle damage detection model training method provided in this application can be applied in the application environment as shown in Fig. 1, in which the client (computer equipment) communicates with the server through the network.
  • the client includes, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, cameras, and portable wearable devices.
  • the server can be implemented as an independent server or a server cluster composed of multiple servers.
  • a vehicle damage detection model training method is provided, and the technical solution mainly includes the following steps S10-S60:
  • the car damage sample set includes a car damage sample image
  • the car damage sample image includes a car damage original image and a car damage conversion image, and one car damage sample image is associated with one car damage label group
  • the car damage label group includes a car damage label type and a rectangular area
  • the car damage conversion image is the car damage original image through the image preprocessing model after random value accumulation and conversion.
  • the car damage sample set includes a plurality of the car damage sample images
  • the car damage sample image is an image of the vehicle containing the damage location
  • the car damage sample image includes the car damage original image and the car damage conversion image; the car damage original image refers to a captured image that contains the damage location and has not undergone image processing, and the car damage conversion image is obtained by processing the car damage original image with the image preprocessing model
  • the car damage label group includes car damage label types and rectangular areas, that is, one car damage label type corresponds to one or more rectangular areas corresponding to that car damage label type; the car damage label types cover 7 damage types: scrapes, scratches, dents, wrinkles, dead folds, tears, and missing parts.
  • the rectangular area is the coordinate area that can cover the damage location through a rectangular frame with a minimum area
  • the car damage conversion image is obtained from the car damage original image by accumulating random values through the image preprocessing model and converting, that is, through the image preprocessing model, the pixel value of each pixel in the car damage original image is accumulated with a random value and then converted
  • the conversion mode in the image preprocessing model can be set according to requirements; for example, conversion can be performed through the red-green-blue (RGB) color space model, through the hexagonal pyramid (HSV) color space model, through the luminance-chrominance (YUV) color space model, and so on.
  • obtaining the car damage conversion image from the car damage original image by random value accumulation and conversion through an image preprocessing model includes:
  • S101 Obtain the car damage original image under the storage path; the car damage original image is one image of the car damage sample set, refers to a captured image that contains the damage location and has not undergone image processing, and is associated with one car damage label group.
  • S102 Separate the original car damage image through the image preprocessing model, and separate the red channel image of the red channel, the green channel image of the green channel, and the blue channel image of the blue channel.
  • the car damage original image includes three channel images (red channel, green channel, and blue channel), that is, each pixel in the image has three channel component values: the red component value, the green component value, and the blue component value; these are separated from the car damage original image by the image preprocessing model to obtain the red channel image, the green channel image, and the blue channel image.
  • S103 Perform random value accumulation processing on the red channel image through the image preprocessing model to obtain a red processing channel image, perform random value accumulation processing on the green channel image to obtain a green processing channel image, and perform random value accumulation processing on the blue channel image to obtain a blue processing channel image.
  • Each pixel value in the red channel image is accumulated with a random value through the image preprocessing model, and the accumulated red channel image is determined as the red processing channel image; each pixel value in the green channel image is accumulated with a random value through the image preprocessing model, and the accumulated green channel image is determined as the green processing channel image; each pixel value in the blue channel image is accumulated with a random value through the image preprocessing model, and the accumulated blue channel image is determined as the blue processing channel image.
  • the random value may be generated by a random module in the image preprocessing model, or one value may be randomly selected from a preset value range through the image preprocessing model; the random module may be a pseudo-random number generator whose algorithm exploits the collision resistance and one-way nature of a one-way hash function to make the generator unpredictable.
  • S104 Input the red processing channel image, the green processing channel image, and the blue processing channel image into the hexagonal pyramid color space model; the hexagonal pyramid color space model is also called the HSV model (Hue, Saturation, Value model), a model that converts according to the intuitive characteristics of color (hue, saturation, and lightness).
  • S105 Convert and merge the red processing channel image, the green processing channel image, and the blue processing channel image through the hexagonal pyramid color space model to obtain the car damage conversion image; the car damage conversion image includes the hue channel image of the hue channel, the saturation channel image of the saturation channel, and the lightness channel image of the lightness channel.
  • Each pixel in the red processing channel image, the green processing channel image, and the blue processing channel image is converted one-to-one through the hexagonal pyramid color space model to obtain the hue (H) component value, the saturation (S) component value, and the lightness (V) component value corresponding to each pixel. The hue component values are assembled according to the corresponding pixel positions to obtain the hue channel image, the saturation component values are assembled according to the corresponding pixel positions to obtain the saturation channel image, and the lightness component values are assembled according to the corresponding pixel positions to obtain the lightness channel image. The hue channel image, the saturation channel image, and the lightness channel image are then combined, that is, the images of the three channels (the hue channel image, the saturation channel image, and the lightness channel image) are merged into one car damage conversion image.
  • S106 Determine the car damage label group associated with the car damage original image as the car damage label group associated with the car damage conversion image.
  • This application realizes channel splitting of the car damage original image through the image preprocessing model and random value accumulation for each channel, and then obtains the car damage conversion image through conversion by the hexagonal pyramid color model (HSV model) in the image preprocessing model. Inputting the car damage conversion image into the car damage detection model for training can prevent the car damage detection model from overfitting, improve the generalization ability of the car damage sample set, and improve the accuracy and reliability of the car damage detection model.
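Steps S101-S106 can be sketched on a flat list of RGB pixels using only the Python standard library. This is a simplified sketch: the offset range `max_shift`, the per-channel (rather than per-pixel) random offset, and the function name are assumptions for illustration, and a real pipeline would operate on whole image arrays.

```python
import colorsys
import random

def augment_to_hsv(rgb_pixels, max_shift=30, seed=None):
    """Split RGB channels, accumulate a random value per channel,
    then convert every pixel to HSV (steps S102-S105, simplified)."""
    rng = random.Random(seed)
    # One random offset per colour channel (random value accumulation)
    dr, dg, db = (rng.randint(-max_shift, max_shift) for _ in range(3))
    out = []
    for r, g, b in rgb_pixels:
        # Accumulate and clip each channel component to 0..255
        r = min(255, max(0, r + dr))
        g = min(255, max(0, g + dg))
        b = min(255, max(0, b + db))
        # Hexagonal pyramid (HSV) conversion of the processed channels
        out.append(colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0))
    return out
```

With `max_shift=0` the function reduces to a plain RGB-to-HSV conversion, so pure red `(255, 0, 0)` maps to `(0.0, 1.0, 1.0)`.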
  • the car damage detection model is a deep convolutional neural network model based on the InceptionV4 model that recognizes the sample car damage type and the sample recognition area in the car damage sample image; that is, the network structure of the car damage detection model is the same as the network structure of the InceptionV4 model.
  • the initial parameters of the car damage detection model can be set according to requirements, or all the parameters of a trained InceptionV4 model can be obtained through transfer learning.
  • the car damage texture features are features related to stripes and ripples; the car damage detection model predicts based on the car damage texture features extracted from the car damage sample image to obtain the prediction result.
  • the prediction result includes a prediction type and a prediction area
  • by transferring the InceptionV4 model, this application can simplify the network structure of the vehicle damage detection model and improve the efficiency of the vehicle damage detection model, thereby achieving rapid recognition.
  • the prediction type is the type predicted by the vehicle damage detection model and includes 7 damage types: scrapes, scratches, dents, wrinkles, dead folds, tears, and missing parts; the prediction area is a predicted rectangular area corresponding to the prediction type; the confidence is the probability with which the vehicle damage detection model predicts the prediction type and the prediction area, and indicates the predictive ability of the vehicle damage detection model.
  • the sample car damage types include 7 damage types: scrapes, scratches, dents, wrinkles, dead folds, tears, and missing parts.
  • before step S20, that is, before the car damage sample image is input into the car damage detection model containing the initial parameters, the method includes:
  • the trained InceptionV4 model is a vehicle-related detection model selected according to requirements; for example, the trained InceptionV4 model may be an InceptionV4 model applied to vehicle lamp brightness detection, or an InceptionV4 model applied to vehicle model detection, and so on.
  • building on an InceptionV4 model whose training has been completed, through transfer learning, allows the model to be constructed quickly, reduces the time for training the car damage detection model, and reduces the cost.
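The transfer-learning initialization can be pictured as copying parameters by name from a trained InceptionV4 parameter dictionary into the new car damage detection model, while replaced layers (for example a new classification head) keep their fresh initialization. The dictionary representation, the parameter names, and the `head.` prefix are assumptions for illustration; real frameworks express the same idea through state dictionaries or checkpoints.

```python
def migrate_parameters(pretrained, target, skip_prefixes=("head.",)):
    """Initialize the car damage detection model from a trained
    InceptionV4 model: copy every parameter whose name matches,
    except those under skip_prefixes (e.g. the new output head)."""
    migrated = dict(target)
    for name, value in pretrained.items():
        if name in migrated and not name.startswith(skip_prefixes):
            migrated[name] = value
    return migrated
```

Only the backbone parameters are copied, so the new head still has to be trained on the car damage sample set while the transferred layers start from the trained values.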
  • the GIOU method first obtains the minimum closed containment area of the two rectangular boxes (that is, the area of the smallest box that contains both rectangular boxes at the same time), then obtains the intersection-over-union of the two rectangular boxes, then obtains the proportion of the minimum closed containment area that belongs to neither of the two boxes (also called the non-area proportion), and finally subtracts the non-area proportion from the intersection-over-union of the two rectangular boxes:
  • Y = X - (C - (A ∪ B)) / C
  • where A and B are the two rectangular boxes, A ∪ B denotes the area of their union, C is the minimum closed containment area, X is the intersection-over-union of the two rectangular boxes (that is, the IOU value in the full text), and Y is the GIOU value of the two rectangular boxes.
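The GIOU computation described above can be sketched for two axis-aligned boxes given as (x1, y1, x2, y2) corner coordinates; the box format and the positive-area assumption are illustrative choices, not part of the patent.

```python
def giou(box_a, box_b):
    """GIOU of two rectangular boxes A and B: the IOU minus the share
    of the minimum closed containment area C belonging to neither box.
    Boxes are (x1, y1, x2, y2) with x2 > x1 and y2 > y1."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    # Intersection rectangle (zero width/height if the boxes do not overlap)
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = area_a + area_b - inter
    iou = inter / union  # X: the intersection-over-union value
    # C: area of the smallest box containing both A and B at the same time
    c = (max(ax2, bx2) - min(ax1, bx1)) * (max(ay2, by2) - min(ay1, by1))
    # Y = X - (C - union) / C, in the range -1..1
    return iou - (c - union) / c
```

Identical boxes give a GIOU of exactly 1, while boxes that drift far apart drive the value toward -1, matching the range discussed in step S302 below.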
  • all the prediction areas in the prediction results are calculated with the GIOU method to obtain the GIOU prediction values between the prediction areas; the confidence threshold is then determined by the soft-NMS algorithm, and the prediction results are screened according to the confidence threshold to obtain the recognition results.
  • the soft-NMS algorithm calculates all the prediction results in a Gaussian weighting manner to obtain the confidence thresholds corresponding to all the prediction results.
  • the recognition result includes a sample car damage type and a sample recognition area.
  • the sample car damage type includes 7 damage types: scrapes, scratches, dents, wrinkles, dead folds, tears, and missing parts.
  • the sample recognition area is the rectangular area in the prediction result corresponding to the confidence threshold among all the prediction results.
  • step S30, that is, obtaining the recognition result by screening the prediction results with the car damage detection model through the GIOU method and the soft-NMS algorithm, includes:
  • the prediction result includes a prediction type, a prediction area, and a confidence level, where the prediction type, the prediction area, and the confidence level correspond to one another.
  • for example, the prediction result is {"scrape", (10, 20), (10, 60), (50, 20), (50, 60), "95.5%"}, where the prediction type is "scrape", the prediction area is the rectangular area enclosed by (10, 20), (10, 60), (50, 20), (50, 60), and the confidence level is "95.5%".
  • S302 Determine a GIOU prediction value corresponding to each prediction region according to all the prediction regions, all the prediction types, and all the confidence levels by using the GIOU method.
  • the GIOU prediction value between prediction regions is calculated by the GIOU method, that is, the GIOU value between one prediction region and each other prediction region is computed, and the maximum of these values is taken as the GIOU prediction value of that region. The GIOU prediction value lies in the range −1 to 1: a value close to −1 indicates that the two regions are far apart, meaning the accuracy of this region is low; a value close to 1 indicates that the two regions nearly overlap, meaning the accuracy of this region is high.
  • the soft-NMS algorithm calculates all the prediction results in a Gaussian weighting manner to obtain the confidence thresholds corresponding to all the prediction results: it applies a Gaussian decay function to the confidences of neighboring prediction regions that overlap, so as to determine an appropriate confidence threshold. The soft-NMS algorithm significantly improves the average precision of existing object detection algorithms when detecting multiple overlapping objects, and determining an appropriate confidence threshold avoids crudely deleting prediction results that merely have large GIOU prediction values.
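A minimal sketch of the Gaussian-weighted rescoring described above, assuming axis-aligned boxes given as (x1, y1, x2, y2); the function names and the `sigma` default are our own:

```python
import math

def _iou(a, b):
    """IOU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union

def soft_nms_gaussian(dets, sigma=0.5):
    """dets: list of (box, confidence). Instead of deleting overlapping boxes
    outright, soft-NMS decays their confidence with a Gaussian of the overlap."""
    dets = sorted(dets, key=lambda d: d[1], reverse=True)
    kept = []
    while dets:
        best = dets.pop(0)          # highest remaining confidence
        kept.append(best)
        # Rescore the rest: the larger the overlap with `best`, the stronger the decay
        dets = sorted(
            ((box, conf * math.exp(-_iou(best[0], box) ** 2 / sigma))
             for box, conf in dets),
            key=lambda d: d[1], reverse=True)
    return kept
```

Note that a box with no overlap keeps its original confidence (the decay factor is exp(0) = 1), which is exactly the "soft" behavior the text contrasts with hard deletion.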
  • S304 Obtain all the prediction results corresponding to the confidence level greater than the confidence threshold, and determine all the prediction results corresponding to the confidence level greater than the confidence threshold as the recognition result.
  • the prediction result corresponding to the confidence level greater than the confidence threshold is marked as the recognition result.
  • this application screens all the prediction results through the GIOU method and the soft-NMS algorithm to obtain the recognition results, which reasonably removes repeated and low-confidence prediction results, makes the car damage detection model more accurate, and improves the reliability of recognition.
  • through the GIOU loss algorithm, the first loss value is calculated. The multi-class cross-entropy method performs probabilistic prediction over multiple car damage label types through a cross-entropy algorithm: the car damage label type and the sample car damage type are input into the cross-entropy function of the cross-entropy algorithm, and the second loss value is calculated.
  • the first loss value indicates the difference between the rectangular area and the sample identification area
  • the second loss value indicates the difference between the car damage label type and the sample car damage type .
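As an illustration of how the multi-class cross-entropy method yields the second loss value, a small sketch (the label order and function name below are assumptions, not from the application):

```python
import math

# The 7 damage label types from the text, in a fixed (assumed) order
DAMAGE_TYPES = ["scratch", "scrape", "dent", "wrinkle", "dead fold", "tear", "missing"]

def second_loss(label_type, predicted_probs):
    """Multi-class cross-entropy: the negative log-probability that the model
    assigned to the true car damage label type."""
    return -math.log(predicted_probs[DAMAGE_TYPES.index(label_type)])
```

The loss is 0 when the model assigns probability 1 to the correct label type and grows as the assigned probability shrinks, so a large second loss value indicates a large difference between the car damage label type and the sample car damage type.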
  • step S40, that is, determining the first loss value from the rectangular area and the sample recognition area through the GIOU loss algorithm, includes:
  • S401 Acquire the rectangular area and the sample identification area.
  • the rectangular area is the coordinate range of the minimum-area rectangular box that covers the damage location
  • the sample identification area is the rectangular area in those prediction results whose confidence is greater than the confidence threshold among all the prediction results.
  • S402 Calculate the IOU value between the sample identification area and the rectangular area through the IOU algorithm.
  • the IOU algorithm computes the ratio of the intersection to the union of the rectangular area and the sample identification area; its function formula is I = |E ∩ F| / |E ∪ F|, where I is the IOU value between the sample identification area and the rectangular area, E is the rectangular area, and F is the sample identification area.
  • S403 Determine a minimum coverage area according to the rectangular area and the sample identification area.
  • each coordinate point of the rectangular area and the sample identification area is obtained; one coordinate point consists of an abscissa value and an ordinate value. From all the coordinate points, the maximum and minimum abscissa values and the maximum and minimum ordinate values are extracted, and these four values are combined to determine the four corner coordinates of the minimum coverage area. For example, if the corner coordinates of the rectangular area are (10, 20), (10, 60), (50, 20), (50, 60) and those of the sample recognition area are (35, 15), (35, 40), (80, 15), (80, 40), then the maximum abscissa is 80, the minimum abscissa is 10, the maximum ordinate is 60, and the minimum ordinate is 15, so the minimum coverage area is the rectangle with corners (10, 15), (10, 60), (80, 15), (80, 60).
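The corner-extraction step in this worked example can be sketched as (function name is our own):

```python
def minimum_coverage_area(*corner_sets):
    """Smallest axis-aligned rectangle covering every corner of every region;
    returns (min_x, min_y, max_x, max_y)."""
    xs = [x for corners in corner_sets for x, _ in corners]
    ys = [y for corners in corner_sets for _, y in corners]
    return min(xs), min(ys), max(xs), max(ys)
```

With the coordinates from the text, the rectangular area (10, 20)-(50, 60) and the sample recognition area (35, 15)-(80, 40) yield the minimum coverage area spanning x from 10 to 80 and y from 15 to 60.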
  • S404 Determine an unoccupied area according to the minimum coverage area, the rectangular area, and the sample identification area.
  • the area remaining after removing the rectangular area and the sample identification area from the minimum coverage area is the unoccupied area.
  • S405 Obtain a ratio of the unoccupied area to the minimum coverage area, and determine the ratio of the unoccupied area to the minimum coverage area as an unoccupied ratio.
  • the area of the unoccupied area is calculated from the rectangular coordinates of the unoccupied area, and the area of the minimum coverage area is calculated from the rectangular coordinates of the minimum coverage area; the ratio of the area of the unoccupied area to the area of the minimum coverage area is then obtained and recorded as the non-occupancy ratio.
  • S406 Using the GIOU loss algorithm, calculate the first loss value corresponding to the sample identification area according to the non-occupancy ratio and the IOU value of the sample identification area from the rectangular area.
  • This application realizes the calculation of the first loss value through the GIOU loss algorithm, provides the direction of the regression loss, and allows the car damage detection model to recognize in a better recognition direction, so that the sample recognition area is closer to the rectangular area, which improves Recognition accuracy rate, and reduced training time.
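A sketch of step S406; the application does not spell out the closed form of the GIOU loss, so the common choice L = 1 − GIOU, with GIOU = IOU − non-occupancy ratio, is assumed here:

```python
def first_loss(iou_value, non_occupancy_ratio):
    """GIOU loss for one sample recognition area, assuming L = 1 - GIOU
    where GIOU = IOU - non-occupancy ratio (this form is an assumption)."""
    return 1.0 - (iou_value - non_occupancy_ratio)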
  • S50 Determine a total loss value according to the first loss value and the second loss value.
  • the total loss value can be obtained by a weighted combination of the first loss value and the second loss value: the first loss value and the second loss value are input into a preset loss model, and the total loss value is calculated by the total loss function in the loss model. The total loss function is L = w1 × M1 + w2 × M2, where:
  • M1 is the first loss value
  • M2 is the second loss value
  • w1 is the weight of the first loss value
  • w2 is the weight of the second loss value.
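The weighted combination can be written directly; the 0.5/0.5 default weights below are illustrative only, since the application does not fix their values:

```python
def total_loss(m1, m2, w1=0.5, w2=0.5):
    """Total loss L = w1*M1 + w2*M2, combining the GIOU loss (M1) and the
    multi-class cross-entropy loss (M2); w1 and w2 are hyperparameters."""
    return w1 * m1 + w2 * m2
```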
  • the convergence condition may be that the total loss value is small and no longer drops after 10,000 calculations; that is, when the total loss value is small and no longer decreases after 10,000 calculations, training is stopped and the converged vehicle damage detection model is recorded as the trained vehicle damage detection model. The convergence condition may also be that the total loss value is less than a set threshold; that is, when the total loss value is less than the set threshold, training is stopped and the converged vehicle damage detection model is recorded as the trained vehicle damage detection model.
  • by iteratively updating the initial parameters of the vehicle damage detection model, the model continuously moves closer to the accurate result, so that recognition accuracy becomes higher and higher.
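The two convergence conditions described above can be sketched as a simple training loop (all thresholds and names below are illustrative):

```python
def train_until_converged(train_step, threshold=1e-3, patience=10000,
                          max_steps=1_000_000):
    """Iterate until either convergence condition from the text holds: the
    total loss drops below a set threshold, or it stops decreasing for
    `patience` consecutive steps. `train_step` performs one parameter update
    and returns the total loss value."""
    best, stale = float("inf"), 0
    for _ in range(max_steps):
        loss = train_step()
        if loss < threshold:
            return loss                 # converged: below the set threshold
        if loss < best:
            best, stale = loss, 0
        else:
            stale += 1
            if stale >= patience:
                return best             # converged: no longer decreasing
    return best
```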
  • the method further includes: when the total loss value reaches the preset convergence condition, the vehicle damage detection model has converged, and the converged vehicle damage detection model is recorded as the trained car damage detection model. In this way, the trained car damage detection model is obtained through continuous training on the car damage sample images in the car damage sample set, which improves the accuracy and reliability of recognition.
  • this application inputs a car damage sample set containing car damage sample images into a car damage detection model for training. A car damage sample image includes a car damage original image and a car damage conversion image, where the car damage conversion image is obtained from the car damage original image by random-value accumulation and conversion through the image preprocessing model. The car damage detection model, based on the InceptionV4 model architecture, extracts the damage texture features of the car damage sample image to obtain at least one prediction result; the recognition result is obtained through the GIOU method and the soft-NMS algorithm; the first loss value is determined from the rectangular area and the sample recognition area through the GIOU loss algorithm, while the second loss value is determined from the car damage label type and the sample car damage type through the multi-class cross-entropy method; the total loss value is determined from the first loss value and the second loss value; when the total loss value does not reach the preset convergence condition, the initial parameters of the vehicle damage detection model are updated iteratively until the total loss value reaches the preset convergence condition, and the converged model is recorded as the trained car damage detection model.
  • the vehicle damage detection method provided in this application can be applied in the application environment as shown in Fig. 1, in which the client (computer equipment) communicates with the server through the network.
  • the client includes, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, cameras, and portable wearable devices.
  • the server can be implemented as an independent server or a server cluster composed of multiple servers.
  • a vehicle damage detection method is provided, and the technical solution mainly includes the following steps S100-S200:
  • after a traffic accident, the vehicle will bear traces of damage. The staff of the insurance company take photos related to the traffic accident, including photos of the vehicle damage, and upload them to the server to trigger the vehicle damage detection instruction; the vehicle damage image contained in the vehicle damage detection instruction is then obtained, where the vehicle damage image is a photograph of the vehicle damage.
  • S200 Input the car damage image into the car damage detection model that has been trained, extract car damage texture features through the car damage detection model, and obtain a final result output by the car damage detection model according to the car damage texture feature;
  • the final result includes a car damage type and a car damage area, and the final result represents the car damage type and car damage area of all damage locations in the car damage image.
  • the final result can be obtained simply by inputting the car damage image into the trained car damage detection model and extracting the car damage texture features through the car damage detection model, which speeds up recognition and thereby improves recognition efficiency.
  • this application acquires a car damage image, inputs it into the above trained car damage detection model, extracts car damage texture features through the model, and obtains the final result, including the car damage type and the car damage area, output by the model according to those features. The final result represents the car damage type and car damage area of all damage locations in the car damage image, which improves recognition speed and efficiency, reduces costs, and improves customer satisfaction.
  • a vehicle damage detection model training device is provided, and the vehicle damage detection model training device corresponds to the vehicle damage detection model training method in the above-mentioned embodiment in a one-to-one correspondence.
  • the vehicle damage detection model training device includes an acquisition module 11, an input module 12, an identification module 13, a determination module 14, a loss module 15 and an iteration module 16.
  • the detailed description of each functional module is as follows:
  • the obtaining module 11 is used to obtain a car damage sample set;
  • the car damage sample set includes car damage sample images, the car damage sample images include car damage original images and car damage conversion images, one of the car damage sample images and one Car damage label group association;
  • the car damage label group includes a car damage label type and a rectangular area;
  • the car damage conversion image is the car damage original image through the image preprocessing model after random value accumulation and conversion;
  • the input module 12 is configured to input the car damage sample image into a car damage detection model containing initial parameters, extract the car damage texture features in the car damage sample image through the car damage detection model, and obtain at least one prediction result output by the car damage detection model according to the extracted car damage texture features;
  • the car damage detection model is a deep convolutional neural network model based on the InceptionV4 model architecture
  • the recognition module 13 is used to obtain the recognition result obtained by screening all the prediction results by the car damage detection model through the GIOU method and the soft-NMS algorithm; the recognition result includes the sample car damage type and the sample recognition area;
  • the determination module 14 is used to determine the first loss value through the GIOU loss algorithm according to the rectangular area and the sample recognition area, and at the same time determine the second loss value through the multi-class cross-entropy method according to the car damage label type and the sample car damage type
  • the loss module 15 is configured to determine a total loss value according to the first loss value and the second loss value;
  • the iterative module 16 is configured to iteratively update the initial parameters of the car damage detection model when the total loss value does not reach the preset convergence condition, until the total loss value reaches the preset convergence condition, The vehicle damage detection model after convergence is recorded as a trained vehicle damage detection model.
  • the loss module 15 includes:
  • the convergence module is configured to record the vehicle damage detection model after convergence as a trained vehicle damage detection model when the total loss value reaches a preset convergence condition.
  • the acquisition module 11 includes:
  • a first obtaining unit configured to obtain the car damage original image and the car damage label set associated with the car damage original image
  • a separation unit configured to separate the original car damage image through an image preprocessing model, and separate the red channel image of the red channel, the green channel image of the green channel, and the blue channel image of the blue channel;
  • the processing unit is configured to perform random value accumulation processing on the red channel image through the image preprocessing model to obtain a red processing channel image, and likewise to perform random value accumulation processing on the green channel image to obtain a green processing channel image and on the blue channel image to obtain a blue processing channel image;
  • An input unit configured to input the red processing channel image, the green processing channel image, and the blue processing channel image into a hexagonal pyramid color space model in the image preprocessing model;
  • the conversion unit is configured to convert the red processing channel image, the green processing channel image, and the blue processing channel image through the hexagonal pyramid color space model to obtain the car damage conversion image;
  • the car damage conversion image includes a hue channel image of the hue channel, a saturation channel image of the saturation channel, and a lightness channel image of the lightness channel;
  • the first determining unit is configured to determine the car damage label group associated with the car damage original image as the car damage label group associated with the car damage conversion image.
  • the identification module 13 includes:
  • the second obtaining unit is configured to obtain, in each prediction result, the prediction area, the prediction type corresponding to the prediction area, and the confidence level corresponding to the prediction area; the prediction result includes the prediction type, the prediction area, and the confidence level;
  • a calculation unit configured to determine the GIOU prediction value corresponding to each prediction region according to all the prediction regions, all the prediction types, and all the confidence levels by using the GIOU method;
  • the second determining unit is used to determine the confidence threshold according to all the predicted GIOU values through the soft-NMS algorithm
  • the screening unit is configured to obtain all the prediction results corresponding to the confidence level greater than the confidence threshold, and determine all the prediction results corresponding to the confidence level greater than the confidence threshold as the recognition result.
  • the calculation unit includes:
  • the calculation subunit is used to calculate the IOU value between the sample identification area and the rectangular area through the IOU algorithm
  • a determining subunit configured to determine a minimum coverage area according to the rectangular area and the sample identification area
  • An identification subunit configured to determine an unoccupied area according to the minimum coverage area, the rectangular area, and the sample identification area;
  • the non-occupied subunit is configured to obtain the ratio of the unoccupied area to the minimum coverage area, and determine the ratio of the unoccupied area to the minimum coverage area as the non-occupied ratio;
  • the output subunit is configured to calculate the first loss value corresponding to the sample identification area according to the non-occupancy ratio and the IOU value of the sample identification area from the rectangular area through the GIOU loss algorithm.
  • the various modules in the vehicle damage detection model training device can be implemented in whole or in part by software, hardware, and combinations thereof.
  • the foregoing modules may be embedded in the form of hardware or independent of the processor in the computer device, or may be stored in the memory of the computer device in the form of software, so that the processor can call and execute the operations corresponding to the foregoing modules.
  • a vehicle damage detection device is provided, and the vehicle damage detection device corresponds to the vehicle damage detection method in the above-mentioned embodiment in a one-to-one correspondence.
  • the vehicle damage detection device includes a receiving module 101 and a detection module 102.
  • the detailed description of each functional module is as follows:
  • the receiving module 101 is configured to receive a car damage detection instruction and obtain a car damage image
  • the detection module 102 is configured to input the car damage image into the car damage detection model trained by the above car damage detection model training method, extract the car damage texture feature through the car damage detection model, and obtain the car damage detection model according to The final result of the car damage texture feature output; the final result includes a car damage type and a car damage area, and the final result represents the car damage type and car damage area of all damage locations in the car damage image.
  • the various modules in the vehicle damage detection device described above can be implemented in whole or in part by software, hardware, and a combination thereof.
  • the above-mentioned modules may be embedded in the form of hardware or independent of the processor in the computer equipment, or may be stored in the memory of the computer equipment in the form of software, so that the processor can call and execute the operations corresponding to the above-mentioned modules.
  • a computer device is provided.
  • the computer device may be a server, and its internal structure diagram may be as shown in FIG. 10.
  • the computer equipment includes a processor, a memory, a network interface, and a database connected through a system bus.
  • the processor of the computer device is used to provide calculation and control capabilities.
  • the memory of the computer device includes a readable storage medium and an internal memory.
  • the readable storage medium stores an operating system, computer readable instructions, and a database.
  • the internal memory provides an environment for the operation of the operating system and computer readable instructions in the readable storage medium.
  • the network interface of the computer device is used to communicate with an external terminal through a network connection.
  • the computer readable instruction is executed by the processor to realize a vehicle damage detection model training method or a vehicle damage detection method.
  • the readable storage medium provided in this embodiment includes a non-volatile readable storage medium and a volatile readable storage medium.
  • a computer device is provided, including a memory, a processor, and computer-readable instructions stored in the memory and capable of running on the processor. When the processor executes the computer-readable instructions, the vehicle damage detection model training method in the foregoing embodiment is implemented, or the vehicle damage detection method in the foregoing embodiment is implemented.
  • one or more readable storage media storing computer-readable instructions are provided. The readable storage media provided in this embodiment include non-volatile readable storage media and volatile readable storage media. The readable storage media store computer-readable instructions, and when the computer-readable instructions are executed by one or more processors, the one or more processors implement the vehicle damage detection model training method in the foregoing embodiment, or the vehicle damage detection method in the foregoing embodiment.
  • a person of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be implemented by instructing relevant hardware through computer-readable instructions. The computer-readable instructions can be stored in a non-volatile readable storage medium or a volatile readable storage medium, and when the computer-readable instructions are executed, the processes of the above method embodiments may be included.
  • any reference to memory, storage, database, or other media used in the embodiments provided in this application may include non-volatile and/or volatile memory.
  • Non-volatile memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).


Abstract

A car damage detection model training method, a car damage detection method, an apparatus, a device, and a medium, relating to the field of artificial-intelligence classification models. The method includes: inputting a car damage sample set containing car damage sample images into a car damage detection model for training; extracting damage texture features through the car damage detection model, which is based on the InceptionV4 model architecture, to obtain at least one prediction result; obtaining a recognition result through the GIOU method and the soft-NMS algorithm; determining a first loss value through the GIOU loss algorithm and, at the same time, a second loss value through the multi-class cross-entropy method; determining a total loss value according to the first loss value and the second loss value; and, when the total loss value does not reach a preset convergence condition, iteratively updating the initial parameters of the car damage detection model until the total loss value reaches the preset convergence condition, whereupon the converged car damage detection model is recorded as the trained car damage detection model. The method enables rapid recognition of the car damage type and the car damage area.

Description

Car damage detection model training method, car damage detection method, apparatus, device, and medium

This application claims priority to the Chinese patent application filed with the China Patent Office on June 8, 2020, with application number 202010513050.5 and invention title "Car damage detection model training, car damage detection method, apparatus, device, and medium", the entire content of which is incorporated herein by reference.

Technical Field

This application relates to the field of artificial-intelligence classification models, and in particular to a car damage detection model training method, a car damage detection method, an apparatus, a computer device, and a storage medium.

Background

The inventors found that after a vehicle has a traffic accident, certain parts of the vehicle bear traces of damage such as breakage and scratches. At present, insurance companies generally rely on manual identification of post-accident vehicle damage images taken by the car owner or business staff; that is, the damage type and damage area of each damaged part of the vehicle in the image are identified and judged manually. Owing to inconsistent understanding of standards and insufficient observation experience, the manually identified damage type and damage area may be incorrect; for example, because dents and scrapes are difficult to distinguish by visually inspecting an image, an assessor may easily classify a dent as a scrape. The loss-assessment errors caused in this way greatly reduce assessment accuracy, which may cause cost losses for the insurance company and also reduce the satisfaction of car owners or customers. In addition, manual loss assessment involves a huge workload and low efficiency, and meeting a given assessment accuracy further increases the workload and reduces efficiency.

Summary

This application provides a car damage detection model training method, a car damage detection method, an apparatus, a computer device, and a storage medium. By introducing car damage conversion images and an architecture based on the InceptionV4 model, and by training with the GIOU method, the soft-NMS algorithm, and the GIOU loss algorithm, the number of samples to be collected can be reduced, recognition accuracy and reliability are improved, costs are reduced, and training efficiency is improved.
A car damage detection model training method includes:

obtaining a car damage sample set, where the car damage sample set includes car damage sample images, the car damage sample images include car damage original images and car damage conversion images, each car damage sample image is associated with one car damage label group, the car damage label group includes a car damage label type and a rectangular area, and the car damage conversion image is obtained from the car damage original image by random-value accumulation and conversion through an image preprocessing model;

inputting the car damage sample image into a car damage detection model containing initial parameters, extracting the car damage texture features in the car damage sample image through the car damage detection model, and obtaining at least one prediction result output by the car damage detection model according to the extracted car damage texture features, where the car damage detection model is a deep convolutional neural network model based on the InceptionV4 model architecture;

obtaining, through the GIOU method and the soft-NMS algorithm, a recognition result obtained by the car damage detection model screening all the prediction results, where the recognition result includes a sample car damage type and a sample recognition area;

determining a first loss value from the rectangular area and the sample recognition area through the GIOU loss algorithm, and at the same time determining a second loss value from the car damage label type and the sample car damage type through the multi-class cross-entropy method;

determining a total loss value according to the first loss value and the second loss value; and

when the total loss value does not reach a preset convergence condition, iteratively updating the initial parameters of the car damage detection model until the total loss value reaches the preset convergence condition, and recording the converged car damage detection model as the trained car damage detection model.
A car damage detection method includes:

receiving a car damage detection instruction and obtaining a car damage image; and

inputting the car damage image into a car damage detection model trained by the above car damage detection model training method, extracting car damage texture features through the car damage detection model, and obtaining a final result output by the car damage detection model according to the car damage texture features, where the final result includes a car damage type and a car damage area and represents the car damage type and car damage area of all damage locations in the car damage image.
A car damage detection model training apparatus includes:

an acquisition module, configured to obtain a car damage sample set, where the car damage sample set includes car damage sample images, the car damage sample images include car damage original images and car damage conversion images, each car damage sample image is associated with one car damage label group, the car damage label group includes a car damage label type and a rectangular area, and the car damage conversion image is obtained from the car damage original image by random-value accumulation and conversion through an image preprocessing model;

an input module, configured to input the car damage sample image into a car damage detection model containing initial parameters, extract the car damage texture features in the car damage sample image through the car damage detection model, and obtain at least one prediction result output by the car damage detection model according to the extracted car damage texture features, where the car damage detection model is a deep convolutional neural network model based on the InceptionV4 model architecture;

a recognition module, configured to obtain, through the GIOU method and the soft-NMS algorithm, a recognition result obtained by the car damage detection model screening all the prediction results, where the recognition result includes a sample car damage type and a sample recognition area;

a determination module, configured to determine a first loss value from the rectangular area and the sample recognition area through the GIOU loss algorithm, and at the same time determine a second loss value from the car damage label type and the sample car damage type through the multi-class cross-entropy method;

a loss module, configured to determine a total loss value according to the first loss value and the second loss value; and

an iteration module, configured to iteratively update the initial parameters of the car damage detection model when the total loss value does not reach a preset convergence condition, until the total loss value reaches the preset convergence condition, and record the converged car damage detection model as the trained car damage detection model.
A car damage detection apparatus includes:

a receiving module, configured to receive a car damage detection instruction and obtain a car damage image; and

a detection module, configured to input the car damage image into the car damage detection model trained by the above car damage detection model training method, extract car damage texture features through the car damage detection model, and obtain a final result output by the car damage detection model according to the car damage texture features, where the final result includes a car damage type and a car damage area and represents the car damage type and car damage area of all damage locations in the car damage image.
A computer device includes a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, where the processor implements the following steps when executing the computer-readable instructions:

obtaining a car damage sample set, where the car damage sample set includes car damage sample images, the car damage sample images include car damage original images and car damage conversion images, each car damage sample image is associated with one car damage label group, the car damage label group includes a car damage label type and a rectangular area, and the car damage conversion image is obtained from the car damage original image by random-value accumulation and conversion through an image preprocessing model;

inputting the car damage sample image into a car damage detection model containing initial parameters, extracting the car damage texture features in the car damage sample image through the car damage detection model, and obtaining at least one prediction result output by the car damage detection model according to the extracted car damage texture features, where the car damage detection model is a deep convolutional neural network model based on the InceptionV4 model architecture;

obtaining, through the GIOU method and the soft-NMS algorithm, a recognition result obtained by the car damage detection model screening all the prediction results, where the recognition result includes a sample car damage type and a sample recognition area;

determining a first loss value from the rectangular area and the sample recognition area through the GIOU loss algorithm, and at the same time determining a second loss value from the car damage label type and the sample car damage type through the multi-class cross-entropy method;

determining a total loss value according to the first loss value and the second loss value; and

when the total loss value does not reach a preset convergence condition, iteratively updating the initial parameters of the car damage detection model until the total loss value reaches the preset convergence condition, and recording the converged car damage detection model as the trained car damage detection model.
A computer device includes a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, where the processor further implements the following steps when executing the computer-readable instructions:

receiving a car damage detection instruction and obtaining a car damage image; and

inputting the car damage image into a car damage detection model trained by the car damage detection model training method, extracting car damage texture features through the car damage detection model, and obtaining a final result output by the car damage detection model according to the car damage texture features, where the final result includes a car damage type and a car damage area and represents the car damage type and car damage area of all damage locations in the car damage image.
One or more readable storage media storing computer-readable instructions are provided, where the computer-readable instructions, when executed by one or more processors, cause the one or more processors to perform the following steps:

obtaining a car damage sample set, where the car damage sample set includes car damage sample images, the car damage sample images include car damage original images and car damage conversion images, each car damage sample image is associated with one car damage label group, the car damage label group includes a car damage label type and a rectangular area, and the car damage conversion image is obtained from the car damage original image by random-value accumulation and conversion through an image preprocessing model;

inputting the car damage sample image into a car damage detection model containing initial parameters, extracting the car damage texture features in the car damage sample image through the car damage detection model, and obtaining at least one prediction result output by the car damage detection model according to the extracted car damage texture features, where the car damage detection model is a deep convolutional neural network model based on the InceptionV4 model architecture;

obtaining, through the GIOU method and the soft-NMS algorithm, a recognition result obtained by the car damage detection model screening all the prediction results, where the recognition result includes a sample car damage type and a sample recognition area;

determining a first loss value from the rectangular area and the sample recognition area through the GIOU loss algorithm, and at the same time determining a second loss value from the car damage label type and the sample car damage type through the multi-class cross-entropy method;

determining a total loss value according to the first loss value and the second loss value; and

when the total loss value does not reach a preset convergence condition, iteratively updating the initial parameters of the car damage detection model until the total loss value reaches the preset convergence condition, and recording the converged car damage detection model as the trained car damage detection model.
One or more readable storage media storing computer-readable instructions are provided, where the computer-readable instructions, when executed by one or more processors, further cause the one or more processors to perform the following steps:

receiving a car damage detection instruction and obtaining a car damage image; and

inputting the car damage image into a car damage detection model trained by the car damage detection model training method, extracting car damage texture features through the car damage detection model, and obtaining a final result output by the car damage detection model according to the car damage texture features, where the final result includes a car damage type and a car damage area and represents the car damage type and car damage area of all damage locations in the car damage image.
The car damage detection model training method, apparatus, computer device, and storage medium provided by this application train a car damage detection model on a car damage sample set containing car damage sample images, where a car damage sample image includes a car damage original image and a car damage conversion image, the latter obtained from the former by random-value accumulation and conversion through an image preprocessing model; extract damage texture features from the car damage sample images through a car damage detection model based on the InceptionV4 model architecture to obtain at least one prediction result; obtain a recognition result through the GIOU method and the soft-NMS algorithm; determine a first loss value from the rectangular area and the sample recognition area through the GIOU loss algorithm, and at the same time determine a second loss value from the car damage label type and the sample car damage type through the multi-class cross-entropy method; determine a total loss value according to the first and second loss values; and, when the total loss value reaches the preset convergence condition, record the converged car damage detection model as the trained car damage detection model. This application therefore provides a car damage detection model training method that, by introducing car damage conversion images and an InceptionV4-based architecture and by training with the GIOU method, the soft-NMS algorithm, and the GIOU loss algorithm, reduces the number of samples to be collected, improves recognition accuracy and reliability, accurately and quickly recognizes the car damage type and car damage area in images containing damage locations, reduces costs, and improves training efficiency.

The car damage detection method, apparatus, computer device, and storage medium provided by this application obtain a car damage image, input it into the above trained car damage detection model, extract car damage texture features through the model, and obtain a final result containing the car damage type and the car damage area output by the model according to those features. The final result represents the car damage type and car damage area of all damage locations in the car damage image. This application thus improves recognition speed, thereby improving the accuracy and reliability of determining the loss-assessment type and loss-assessment area, improving loss-assessment efficiency, reducing costs, and improving customer satisfaction.

Details of one or more embodiments of this application are set forth in the following drawings and description; other features and advantages of this application will become apparent from the specification, the drawings, and the claims.
Brief Description of the Drawings

To explain the technical solutions of the embodiments of this application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of this application; a person of ordinary skill in the art can obtain other drawings from them without creative effort.

Fig. 1 is a schematic diagram of the application environment of the car damage detection model training method or the car damage detection method in an embodiment of this application;

Fig. 2 is a flowchart of the car damage detection model training method in an embodiment of this application;

Fig. 3 is a flowchart of the car damage detection model training method in another embodiment of this application;

Fig. 4 is a flowchart of step S10 of the car damage detection model training method in an embodiment of this application;

Fig. 5 is a flowchart of step S30 of the car damage detection model training method in an embodiment of this application;

Fig. 6 is a flowchart of step S40 of the car damage detection model training method in an embodiment of this application;

Fig. 7 is a flowchart of the car damage detection method in an embodiment of this application;

Fig. 8 is a functional block diagram of the car damage detection model training apparatus in an embodiment of this application;

Fig. 9 is a functional block diagram of the car damage detection apparatus in an embodiment of this application;

Fig. 10 is a schematic diagram of the computer device in an embodiment of this application.
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
本申请提供的车损检测模型训练方法,可应用在如图1的应用环境中,其中,客户端(计算机设备)通过网络与服务器进行通信。其中,客户端(计算机设备)包括但不限于为各种个人计算机、笔记本电脑、智能手机、平板电脑、摄像头和便携式可穿戴设备。服务器可以用独立的服务器或者是多个服务器组成的服务器集群来实现。
在一实施例中,如图2所示,提供一种车损检测模型训练方法,其技术方案主要包括以下步骤S10-S60:
S10,获取车损样本集;所述车损样本集包括车损样本图像,所述车损样本图像包括车损原始图像和车损转换图像,一个所述车损样本图像与一个车损标签组关联;所述车损标签组包括车损标签类型和矩形区域;所述车损转换图像为所述车损原始图像通过图像预处理模型进行随机数值累加后并转换获得。
可理解地,所述车损样本集包含有多个所述车损样本图像,所述车损样本图像为车辆含有损伤位置的图像,所述车损样本图像中包含有所述车损原始图像和所述车损转换图像, 所述车损原始图像指拍摄的含有损伤位置且未经过图像处理的图像,所述车损转换图像为对车损原始图像进行图像预处理模型处理后转换获得,所述车损标签组包括车损标签类型和矩形区域,即一个车损标签类型对应一个或者多个与该车损标签类型对应的矩形区域,所述车损标签类型包括划痕、刮擦、凹陷、褶皱、死折、撕裂、缺失等7种损伤类型,所述矩形区域为通过一个最小面积的矩形框能覆盖损伤位置的坐标区域范围,所述车损转换图像为所述车损原始图像通过图像预处理模型进行随机数值累加后并转换获得,即通过所述图像预处理模型对所述车损原始图像中的每个像素点的像素值进行随机数值累加后再进行转换处理,得到所述车损原始图像对应的所述车损转换图像,所述图像预处理模型中转换的方式可以根据需求设定,比如图像预处理模型中转换的方式可以通过红绿蓝(RGB)颜色空间模型转换,或者通过六角锥体(HSV)颜色空间模型转换,或者通过彩色视频(YUV)颜色空间模型转换等等。
在一实施例中,如图4所示,所述步骤S10之前,即所述车损转换图像为所述车损原始图像通过图像预处理模型进行随机数值累加后并转换获得,包括:
S101,获取所述车损原始图像和与所述车损原始图像关联的所述车损标签组。
可理解地,在接收到车损转换图像的生成指令后,根据所述生成指令中的所述车损原始图像的路径,获取该路径下的所述车损原始图像,所述车损原始图像为所述车损样本集中的其中一个,所述车损原始图像指拍摄的含有损伤位置且未经过图像处理的图像,所述车损原始图像与一个所述车损标签组关联。
S102,通过图像预处理模型将所述车损原始图像分离,分离出红色通道的红色通道图像、绿色通道的绿色通道图像和蓝色通道的蓝色通道图像。
可理解地,所述车损原始图像包括三个通道(红色通道、绿色通道和蓝色通道)图像,即所述车损原始图像中的每个像素点有三个通道分量值,分别为红色分量值、绿色分量值和蓝色分量值,通过所述图像预处理模型对所述车损原始图像进行分离,得到所述红色通道图像、所述绿色通道图像和所述蓝色通道图像。
S103,通过图像预处理模型,对所述红色通道图像进行随机数值累加处理,得到红色加工通道图像,同时对所述绿色通道图像进行随机数值累加处理,得到绿色加工通道图像,以及对所述蓝色通道图像进行随机数值累加处理,得到蓝色加工通道图像。
可理解地,通过图像预处理模型将所述红色通道图像中的每个像素值累加一个随机数值,将累加之后的红色通道图像确定为红色加工通道图像;通过图像预处理模型将所述绿色通道图像中的每个像素值累加一个随机数值,将累加之后的绿色通道图像确定为绿色加工通道图像;通过图像预处理模型将所述蓝色通道图像中的每个像素值累加一个随机数值,将累加之后的蓝色通道图像确定为蓝色加工通道图像。
其中,所述随机数值可以通过所述图像预处理模型中的随机模块生成,也可以通过所述图像预处理模型从预设的数值范围内随机抽取其中的一个数值,所述随机模块可以为伪随机数生成器,所述随机模块中运用的算法利用单向散列函数的强碰撞性和单向性使得伪随机数生成器拥有不可预知性。
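上文提到的利用单向散列函数的单向性与强抗碰撞性构造不可预知伪随机数的思路,可以用如下Python片段示意(采用SHA-256、偏移取值范围[-20, 20]均为本示例的假设,并非原文给定的实现):

```python
import hashlib

def pseudo_random_offset(seed: bytes, low: int = -20, high: int = 20) -> int:
    """基于单向散列函数(SHA-256)生成不可预知的伪随机偏移量。

    low/high 为假设的偏移取值范围;相同 seed 得到相同结果,
    但在不知道 seed 的情况下结果不可预测。
    """
    digest = hashlib.sha256(seed).digest()
    # 取摘要前 8 字节映射到 [low, high] 区间
    value = int.from_bytes(digest[:8], "big")
    return low + value % (high - low + 1)

offset = pseudo_random_offset(b"vehicle-damage-sample-001")
```

生成的偏移量即可作为对某一通道像素值统一累加的随机数值。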
S104,将所述红色加工通道图像、所述绿色加工通道图像和所述蓝色加工通道图像输入所述图像预处理模型中的六角锥体颜色空间模型;
可理解地,所述六角锥体颜色空间模型也称为HSV模型(Hue Saturation Value模型),根据颜色的直观特性(色调、饱和度和明度)进行转换的模型,将所述红色加工通道图像、所述绿色加工通道图像和所述蓝色加工通道图像输入至所述六角锥体颜色空间模型。
S105,通过所述六角锥体颜色空间模型对所述红色加工通道图像、所述绿色加工通道图像和所述蓝色加工通道图像进行转换及合并,得到所述车损转换图像;其中,所述车损转换图像包括色调通道的色调通道图像、饱和度通道的饱和度通道图像和明度通道的明度通道图像。
可理解地,通过所述六角锥体颜色空间模型将所述红色加工通道图像中的每个像素点、所述绿色加工通道图像中的每个像素点和所述蓝色加工通道图像中的每个像素点进行一一对应的转换得到每个像素点对应的色调(H)分量值、饱和度(S)分量值和明度(V)分量值,将各像素点的色调分量值按照该像素点对应的位置进行汇总得到色调通道图像,将各像素点的饱和度分量值按照该像素点对应的位置进行汇总得到饱和度通道图像,将各像素点的明度分量值按照该像素点对应的位置进行汇总得到明度通道图像,将所述色调通道图像、所述饱和度通道图像和所述明度通道图像进行合并,得到所述车损转换图像,即将三个通道的图像(所述色调通道图像、所述饱和度通道图像和所述明度通道图像)合并成一幅所述车损转换图像。
S106,将所述车损原始图像关联的车损标签组确定为所述车损转换图像关联的车损标签组。
可理解地,要将所述车损原始图像用于所述车损样本集中,需要将所述车损转换图像与该所述车损转换图像对应的所述车损原始图像关联的车损标签组进行关联。
本申请实现了通过图像预处理模型对车损原始图像进行通道拆分并对每个通道进行随机数值累加处理,再通过图像预处理模型中的六角锥体颜色空间模型(HSV模型)转换获得车损转换图像,将所述车损转换图像输入所述车损检测模型进行训练能够防止所述车损检测模型过拟合,并且提高了所述车损样本集的泛化能力,提高了车损检测模型的准确率和可靠性。
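步骤S101-S106的"通道拆分、逐通道随机数值累加、转入HSV颜色空间"流程,可用Python标准库colorsys写成如下最小示意(以像素元组列表表示图像、偏移上限max_offset等均为本示例的假设):

```python
import colorsys
import random

def rgb_to_hsv_augmented(pixels, max_offset=15, seed=0):
    """对 RGB 图像逐通道累加随机数值后转换到 HSV(六角锥体)颜色空间的示意。

    pixels 为 [(r, g, b), ...] 形式的像素列表(取值 0~255);
    返回 [(h, s, v), ...],各分量归一化到 0~1。
    """
    rng = random.Random(seed)
    # 红/绿/蓝三个通道各自累加一个随机数值(对应红/绿/蓝加工通道图像)
    dr, dg, db = (rng.randint(-max_offset, max_offset) for _ in range(3))
    out = []
    for r, g, b in pixels:
        # 累加后截断到合法像素值范围,再归一化送入 HSV 转换
        r2 = min(255, max(0, r + dr)) / 255.0
        g2 = min(255, max(0, g + dg)) / 255.0
        b2 = min(255, max(0, b + db)) / 255.0
        out.append(colorsys.rgb_to_hsv(r2, g2, b2))
    return out

hsv = rgb_to_hsv_augmented([(200, 30, 30), (10, 10, 10)])
```

实际工程中通常用OpenCV或NumPy按整幅图像做同样的变换,这里仅演示逐像素的计算关系。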
S20,将所述车损样本图像输入含有初始参数的车损检测模型,通过所述车损检测模型提取所述车损样本图像中的车损纹理特征,获取所述车损检测模型根据提取的所述车损纹理特征输出的至少一个的预测结果;所述车损检测模型为基于InceptionV4模型架构的深度卷积神经网络模型。
可理解地,所述车损检测模型为识别所述车损样本图像中样本车损类型和样本识别区域的基于InceptionV4模型的深度卷积神经网络模型,即所述车损检测模型的网络结构与InceptionV4模型的网络结构相同,所述车损检测模型的所述初始参数可以根据需求设定,也可以通过迁移学习方法获取InceptionV4模型的所有参数,所述车损纹理特征为车辆损伤位置的条纹、波光纹相关的特征,所述车损检测模型根据提取出的所述车损样本图像中的所述车损纹理特征进行预测,得出所述预测结果,所述预测结果包括预测类型、预测区域和置信度,本申请通过迁移所述InceptionV4模型可以简化所述车损检测模型的网络结构和提高所述车损检测模型的效率,实现了快速识别的效果。
其中,所述预测类型为通过所述车损检测模型预测的类型,所述预测类型包括划痕、刮擦、凹陷、褶皱、死折、撕裂、缺失等7种损伤类型,所述预测区域为与所述预测类型对应的且预测的矩形的区域,所述置信度为所述车损检测模型预测出所述预测结果和所述预测区域的概率,所述置信度表明了所述车损检测模型的预测能力,所述样本车损类型包括划痕、刮擦、凹陷、褶皱、死折、撕裂、缺失等7种损伤类型。
在一实施例中,所述步骤S20之前,即将所述车损样本图像输入含有初始参数的车损检测模型之前,包括:
S201,通过迁移学习,获取训练完成的InceptionV4模型的所有参数,将所有所述参数确定为所述车损检测模型中的所述初始参数。
可理解地,所述训练完成的InceptionV4模型根据需求选择与车辆相关检测的模型,比如:所述训练完成的InceptionV4模型为应用于车辆车灯亮度检测的InceptionV4模型,或者所述训练完成的InceptionV4模型为应用于车辆车型检测的InceptionV4模型等等。
本申请通过迁移学习训练完成的InceptionV4模型,能够快速构架模型并且减少了训练车损检测模型的时间,减少了成本。
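迁移学习初始化这一步,概念上相当于把训练完成的InceptionV4模型的全部参数整体复制为车损检测模型的初始参数。下面用字典模拟参数集合做一个极简示意(仅演示"全部参数迁移"这一概念,并非任何真实深度学习框架的API):

```python
def init_from_pretrained(pretrained_params: dict) -> dict:
    """迁移学习初始化示意:复制预训练模型的全部参数作为新模型的初始参数。

    这里用字典代表"参数名 -> 参数值"的集合;真实场景中通常由框架的
    load_state_dict / load_weights 等机制完成等价操作。
    """
    return {name: value for name, value in pretrained_params.items()}

# 假设的预训练参数(名称仅作示例)
pretrained = {"stem.conv1.weight": [0.1, 0.2], "mixed_7a.bias": [0.0]}
initial_params = init_from_pretrained(pretrained)
```

迁移后的参数再由后文的损失反传继续迭代更新。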
S30,通过GIOU方法和soft-NMS算法,获取所述车损检测模型对所有所述预测结果进行筛选获得的识别结果;所述识别结果包括样本车损类型和样本识别区域。
可理解地,所述GIOU方法为:先获取两个矩形框的最小闭合包含区域(即同时包含两个矩形框的最小矩形区域),再计算两个矩形框的交并比,然后计算该最小闭合包含区域中不属于两个矩形框的区域占该最小闭合包含区域的占比(也可称为非占用比),最后将两个矩形框的交并比减去该非占用比得到GIOU值,也即
Y = X - |C\(A∪B)| / |C|
其中,A和B为两个矩形框,C为最小闭合包含区域,X为两个矩形框的交并比(也即全文中的IOU值),Y为两个矩形框的GIOU值,通过所述GIOU方法,对所有所述预测结果中的预测区域进行计算得出各所述预测区域之间的GIOU预测值,再通过所述soft-NMS算法确定出置信阈值,根据该置信阈值对所有所述预测结果进行筛选,从而得出所述识别结果,所述soft-NMS算法为通过高斯加权方式对所有所述预测结果进行计算获得所有所述预测结果对应的置信阈值。
其中,所述识别结果包括样本车损类型和样本识别区域,所述样本车损类型包括划痕、刮擦、凹陷、褶皱、死折、撕裂、缺失等7种损伤类型,所述样本识别区域为在所有所述预测结果中超过置信阈值对应的预测结果中的矩形区域。
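上述GIOU方法(Y = X - |C\(A∪B)|/|C|)可用如下Python函数示意,其中矩形框以 (x1, y1, x2, y2) 表示(该坐标格式为本示例的约定):

```python
def iou_and_giou(box_a, box_b):
    """按正文公式计算两个矩形框的 IOU 值 X 与 GIOU 值 Y。

    box 形如 (x1, y1, x2, y2),假设 x1 < x2 且 y1 < y2。
    """
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # 交集面积
    inter_w = max(0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    # 并集面积
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    x = inter / union                       # 交并比 X(IOU 值)
    # C 为同时包含 A、B 的最小闭合包含区域
    c = (max(ax2, bx2) - min(ax1, bx1)) * (max(ay2, by2) - min(ay1, by1))
    y = x - (c - union) / c                 # GIOU 值 Y = X - |C\(A∪B)|/|C|
    return x, y
```

当两个矩形框完全重合时 Y = 1;两框相距越远,Y 越接近 -1。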
在一实施例中,如图5所示,所述步骤S30,即所述通过GIOU方法和soft-NMS算法,获取所述车损检测模型对所述预测结果进行筛选获得的识别结果,包括:
S301,获取每个所述预测结果中的所述预测区域、与所述预测区域对应的所述预测类型和与所述预测区域对应的置信度;所述预测结果包括预测类型、预测区域和置信度。
可理解地,获取一个所述预测结果,所述预测结果包括所述预测类型、所述预测区域和所述置信度,其中,所述预测类型、所述预测区域和所述置信度之间存在对应关系,例如:预测结果为{“刮擦”,(10,20),(10,60),(50,20),(50,60),“95.5%”},其中预测类型为“刮擦”,预测区域为(10,20),(10,60),(50,20),(50,60)围成的矩形区域,置信度为“95.5%”。
S302,通过GIOU方法,根据所有所述预测区域、所有所述预测类型和所有所述置信度,确定每个所述预测区域对应的GIOU预测值。
可理解地,通过所述GIOU方法,计算出各所述预测区域之间的GIOU预测值,即将一个所述预测区域与其他任一个所述预测区域进行所述GIOU方法的计算,得出这两个预测区域之间的GIOU预测值,并对同一所述预测区域对应的多个GIOU预测值取最大值;所述GIOU预测值的取值范围为-1至1,其中,所述GIOU预测值靠近-1时,表明两个区域远离,说明此区域的准确率低;所述GIOU预测值靠近1时,表明两个区域接近重合,说明此区域的准确率高。
S303,通过soft-NMS算法,根据所有所述GIOU预测值确定置信阈值。
可理解地,所述soft-NMS算法为通过高斯加权方式对所有所述预测结果进行计算,获得所有所述预测结果对应的置信阈值;所述soft-NMS算法对重叠部分的相邻预测区域设置一个高斯衰减函数,从而确定出合适的置信阈值。soft-NMS算法对现有物体检测算法在多个重叠物体检测场景下的平均准确率有显著提升,通过所述soft-NMS算法能够确定出合适的置信阈值,避免直接删除一些GIOU预测值较大的预测结果。
S304,获取所有所述置信度大于所述置信阈值对应的所述预测结果,并将所有所述置信度大于所述置信阈值对应的所述预测结果确定为所述识别结果。
可理解地,将所述置信度大于所述置信阈值对应的所述预测结果标记为所述识别结果。
本申请实现了通过GIOU方法和soft-NMS算法,对所有所述预测结果进行筛选获得识别结果,能够合理去除重复且置信度低的预测结果,让车损检测模型的准确率更高,提升了识别的可靠性。
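步骤S301-S304中"高斯加权衰减重叠框的置信度、再按置信阈值筛选"的soft-NMS流程,可按如下方式示意(sigma与score_threshold为假设的超参数,衰减系数采用soft-NMS常见的 exp(-IOU²/σ) 高斯形式):

```python
import math

def soft_nms(detections, sigma=0.5, score_threshold=0.3):
    """soft-NMS 示意:不直接删除高重叠预测框,而是按 IOU 对其置信度做高斯衰减,
    最后仅保留置信度仍高于阈值的预测结果。

    detections: [(box, score)],box 形如 (x1, y1, x2, y2)。
    """
    def iou(a, b):
        ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
        iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
        inter = ix * iy
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union else 0.0

    rest = [list(d) for d in detections]
    kept = []
    while rest:
        rest.sort(key=lambda d: d[1], reverse=True)
        best = rest.pop(0)                  # 当前置信度最高的预测结果
        kept.append(tuple(best))
        for d in rest:
            # 与已选框重叠越大,置信度衰减越多(高斯衰减)
            d[1] *= math.exp(-iou(best[0], d[0]) ** 2 / sigma)
        rest = [d for d in rest if d[1] >= score_threshold]
    return [k for k in kept if k[1] >= score_threshold]
```

与传统NMS直接置零不同,远离已选框的预测结果置信度几乎不变,重叠严重的则被平滑压低后剔除。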
S40,通过GIOU损失算法,根据所述矩形区域和所述样本识别区域确定出第一损失值,同时通过多分类交叉熵方法,根据所述车损标签类型和所述样本车损类型确定出第二损失值。
可理解地,所述GIOU损失算法为先获取所述样本识别区域与所述矩形区域的GIOU值,再用一减去该GIOU值的算法,即所述GIOU损失算法中的损失函数为L=1-Z,其中,L为所述第一损失值,Z为所述矩形区域和所述样本识别区域的GIOU值;通过GIOU损失算法,将所述矩形区域和所述样本识别区域输入所述损失函数中,计算出所述第一损失值。所述多分类交叉熵方法为通过交叉熵算法对多个车损标签类型进行概率预测的方法,运用所述交叉熵算法,将所述车损标签类型和所述样本车损类型输入所述交叉熵算法中的交叉熵函数,计算出所述第二损失值。
其中,所述第一损失值表明了所述矩形区域与所述样本识别区域之间的差距,所述第二损失值表明了所述车损标签类型与所述样本车损类型之间的差距。
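多分类交叉熵(第二损失值)的计算可示意如下(7类损伤的one-hot标签与预测概率分布均为假设的示例数据):

```python
import math

def cross_entropy(label_probs, pred_probs):
    """多分类交叉熵示意:由车损标签类型(one-hot)与样本车损类型的预测概率
    计算第二损失值:loss = -Σ y_i * log(p_i)。
    """
    eps = 1e-12  # 防止 log(0)
    return -sum(y * math.log(p + eps) for y, p in zip(label_probs, pred_probs))

# 7 类:划痕、刮擦、凹陷、褶皱、死折、撕裂、缺失;假设真值为"刮擦"
label = [0, 1, 0, 0, 0, 0, 0]
pred = [0.01, 0.9, 0.02, 0.02, 0.02, 0.02, 0.01]
loss = cross_entropy(label, pred)
```

预测概率越集中于真实类别,该损失越接近0。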
在一实施例中,如图6所示,所述步骤S40中,即所述通过GIOU损失算法,根据所述矩形区域和所述样本识别区域确定出第一损失值,包括:
S401,获取所述矩形区域和所述样本识别区域。
可理解地,所述矩形区域为通过一个最小面积的矩形框能覆盖损伤位置的坐标区域范围,所述样本识别区域为在所有所述预测结果中超过置信阈值对应的预测结果中的矩形区域。
S402,通过IOU算法,计算出所述样本识别区域距离所述矩形区域的IOU值。
可理解地,所述IOU算法为所述矩形区域的面积和所述样本识别区域的面积的交集和并集的比值,所述IOU算法的函数公式为
I = |E∩F| / |E∪F|
其中,I为所述样本识别区域距离所述矩形区域的IOU值,E为所述矩形区域的面积,F为所述样本识别区域的面积,|E∪F|为所述矩形区域的面积和所述样本识别区域的面积的并集,|E∩F|为所述矩形区域的面积和所述样本识别区域的面积的交集。
S403,根据所述矩形区域和所述样本识别区域,确定最小覆盖区域。
可理解地,通过所述矩形区域的矩形坐标和所述样本识别区域的矩形坐标,即获取各个坐标点,一个所述坐标点包括一个横坐标值和一个纵坐标值,从所有的坐标点中提取出所有横坐标值中的横坐标最大值和横坐标最小值,以及从所有的坐标点中提取出所有纵坐标值中的纵坐标最大值和纵坐标最小值,将所述横坐标最大值、所述横坐标最小值、所述纵坐标最大值和所述纵坐标最小值进行组合,确定出所述最小覆盖区域的矩形坐标中的四个坐标点,例如:矩形区域的矩形坐标为(10,20),(10,60),(50,20),(50,60);样本识别区域的矩形坐标为(35,15),(35,40),(80,15),(80,40);则横坐标最大值为80、横坐标最小值为10、纵坐标最大值为60和纵坐标最小值为15,从而最小覆盖区域的矩形坐标为(10,15),(10,60),(80,15),(80,60)。
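步骤S403中"取所有顶点横、纵坐标的最大/最小值组合"的求最小覆盖区域过程,可用正文的示例数据验证如下(矩形以 (x1, y1, x2, y2) 表示为本示例的约定):

```python
def min_cover_box(box_a, box_b):
    """由两个矩形的坐标求最小覆盖区域:
    横坐标取所有顶点的最小/最大值,纵坐标同理。
    """
    xs = [box_a[0], box_a[2], box_b[0], box_b[2]]
    ys = [box_a[1], box_a[3], box_b[1], box_b[3]]
    return (min(xs), min(ys), max(xs), max(ys))

# 正文示例:矩形区域 (10,20)-(50,60) 与样本识别区域 (35,15)-(80,40)
cover = min_cover_box((10, 20, 50, 60), (35, 15, 80, 40))
```

按正文示例,得到的最小覆盖区域四个顶点即 (10,15)、(10,60)、(80,15)、(80,60)。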
S404,根据所述最小覆盖区域、所述矩形区域和所述样本识别区域,确定未占用区域。
可理解地,从所述最小覆盖区域中去除掉所述矩形区域和所述样本识别区域之后剩下的区域就为所述未占用区域。
S405,获取所述未占用区域与所述最小覆盖区域的比值,并将所述未占用区域与所述最小覆盖区域的比值确定为非占用比。
可理解地,获取所述未占用区域的面积,即通过所述未占用区域的矩形坐标计算出所述未占用区域的面积,再获取所述最小覆盖区域的面积,即通过所述最小覆盖区域的矩形坐标计算出所述最小覆盖区域的面积,从而得到所述未占用区域与所述最小覆盖区域的比值,即所述为占用区域的面积与所述最小覆盖区域的面积的比值,将此比值标记为所述非占用比。
S406,通过所述GIOU损失算法,根据所述非占用比和所述样本识别区域距离所述矩形区域的IOU值,计算出所述样本识别区域对应的所述第一损失值。
可理解地,所述GIOU损失算法中的损失函数为L=1-Z,其中,L为所述第一损失值,Z为所述矩形区域和所述样本识别区域的GIOU值;所述矩形区域和所述样本识别区域的GIOU值通过Z=G-H获得,其中,G为所述样本识别区域距离所述矩形区域的IOU值,H为所述非占用比。
本申请实现了通过GIOU损失算法计算出第一损失值,提供了回归损失的方向,让所述车损检测模型向更优的识别方向进行识别,从而让样本识别区域向矩形区域靠拢,提高了识别准确率,且减少了训练时间。
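把步骤S401-S406(IOU值G、最小覆盖区域、非占用比H、Z=G-H、L=1-Z)串起来,GIOU损失可示意为如下函数(矩形以 (x1, y1, x2, y2) 表示为本示例的约定):

```python
def giou_loss(pred_box, gt_box):
    """GIOU 损失示意:Z = G - H(G 为 IOU 值,H 为非占用比),第一损失值 L = 1 - Z。"""
    px1, py1, px2, py2 = pred_box
    gx1, gy1, gx2, gy2 = gt_box
    # 交集与并集面积
    ix = max(0, min(px2, gx2) - max(px1, gx1))
    iy = max(0, min(py2, gy2) - max(py1, gy1))
    inter = ix * iy
    union = ((px2 - px1) * (py2 - py1)
             + (gx2 - gx1) * (gy2 - gy1) - inter)
    g = inter / union                        # IOU 值 G
    # 最小覆盖区域面积,进而得到未占用区域占比(非占用比 H)
    cover = (max(px2, gx2) - min(px1, gx1)) * (max(py2, gy2) - min(py1, gy1))
    h = (cover - union) / cover
    return 1 - (g - h)                       # 第一损失值 L = 1 - Z
```

两框完全重合时损失为0;两框越远,损失越大,这正是正文所说"让样本识别区域向矩形区域靠拢"的回归方向。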
S50,根据所述第一损失值和所述第二损失值,确定总损失值。
可理解地,所述总损失值可以通过对所述第一损失值与所述第二损失值进行加权平均获得,即将所述第一损失值和所述第二损失值输入预设的损失模型,通过所述损失模型中的总损失函数计算出所述总损失值;所述总损失函数为:
L_T = w_1 × M1 + w_2 × M2
其中,M1为第一损失值,M2为第二损失值,w_1为第一损失值的权重,w_2为第二损失值的权重。
S60,在所述总损失值未达到预设的收敛条件时,迭代更新所述车损检测模型的初始参数,直至所述总损失值达到所述预设的收敛条件时,将收敛之后的所述车损检测模型记录为训练完成的车损检测模型。
可理解地,所述收敛条件可以为所述总损失值经过10000次计算后数值已很小且不再下降的条件,即在所述总损失值经过10000次计算后数值已很小且不再下降时,停止训练,将收敛之后的所述车损检测模型记录为训练完成的车损检测模型;所述收敛条件也可以为所述总损失值小于设定阈值的条件,即在所述总损失值小于设定阈值时,停止训练,并将收敛之后的所述车损检测模型记录为训练完成的车损检测模型。
如此,在所述总损失值未达到预设的收敛条件时,不断更新迭代所述车损检测模型的初始参数,可以不断向准确的结果靠拢,让识别的准确率越来越高。
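步骤S60"未收敛则迭代更新参数、达到收敛条件即停止"的控制流程,可示意如下(step_fn、threshold、max_iters均为假设,损失单调减半的玩具更新仅用于演示):

```python
def train_until_converged(step_fn, params, threshold=0.05, max_iters=10000):
    """收敛控制示意:总损失未达到预设收敛条件时迭代更新参数,
    达到条件(此处假设为损失小于阈值)后停止并返回模型参数。

    step_fn(params) 返回 (新参数, 总损失)。
    """
    loss = float("inf")
    for _ in range(max_iters):
        params, loss = step_fn(params)
        if loss < threshold:     # 总损失达到收敛条件,训练完成
            break
    return params, loss

# 用一个损失每步减半的玩具更新演示收敛
params0 = {"w": 1.0}
new_params, final_loss = train_until_converged(
    lambda p: ({"w": p["w"] * 0.5}, p["w"] * 0.5), params0)
```

真实训练中 step_fn 对应一次前向计算总损失与一次反向传播更新参数。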
在一实施例中,如图3所示,所述步骤S50之后,即所述根据所述第一损失值和所述第二损失值,确定总损失值之后,还包括:
S70,在所述总损失值达到预设的收敛条件时,将收敛之后的所述车损检测模型记录为训练完成的车损检测模型。
可理解地,在所述总损失值达到预设的收敛条件时,说明所述总损失值已经达到最优的结果,此时所述车损检测模型已经收敛,将收敛之后的所述车损检测模型记录为训练完成的车损检测模型,如此,根据所述车损样本集中的车损样本图像,通过不断训练获得训练完成的车损检测模型,能够提升识别的准确率和可靠性。
本申请通过将包含车损样本图像的车损样本集输入车损检测模型进行训练,所述车损样本图像包括车损原始图像和车损转换图像,所述车损转换图像为所述车损原始图像通过图像预处理模型进行随机数值累加后并转换获得;通过基于InceptionV4模型架构的车损检测模型对车损样本图像进行车损纹理特征的提取,获取至少一个的预测结果;通过GIOU方法和soft-NMS算法,获取识别结果;通过GIOU损失算法,根据所述矩形区域和所述样本识别区域确定出第一损失值,同时通过多分类交叉熵方法,根据所述车损标签类型和所述样本车损类型确定出第二损失值;根据所述第一损失值和所述第二损失值,确定总损失值;在总损失值未达到预设的收敛条件时,迭代更新所述车损检测模型的初始参数,直至所述总损失值达到所述预设的收敛条件时,将收敛之后的所述车损检测模型记录为训练完成的车损检测模型。因此,本申请提供了一种车损检测模型训练方法,通过引入车损转换图像并基于InceptionV4模型进行架构,且通过GIOU方法、soft-NMS算法和GIOU损失算法进行训练,能够减少样本收集数量、提升识别准确性和可靠性,实现了准确、快速地识别出图像中所有损伤位置的车损类型和车损区域,减少了成本,提高了训练效率。
本申请提供的车损检测方法,可应用在如图1的应用环境中,其中,客户端(计算机设备)通过网络与服务器进行通信。其中,客户端(计算机设备)包括但不限于为各种个人计算机、笔记本电脑、智能手机、平板电脑、摄像头和便携式可穿戴设备。服务器可以用独立的服务器或者是多个服务器组成的服务器集群来实现。
在一实施例中,如图7所示,提供一种车损检测方法,其技术方案主要包括以下步骤S100-S200:
S100,接收到车损检测指令,获取车损图像;
可理解地,在车辆发生交通事故后,车辆会留下损伤的痕迹,保险公司的工作人员会拍摄交通事故的相关照片,这些照片包括车辆损伤的照片,工作人员将车辆损伤的照片上传至服务器,以触发所述车损检测指令,获取所述车损检测指令中含有的所述车损图像,所述车损图像为拍摄的车辆损伤的照片。
S200,将所述车损图像输入上述训练完成的车损检测模型,通过所述车损检测模型提取车损纹理特征,获取所述车损检测模型根据所述车损纹理特征输出的最终结果;所述最终结果包括车损类型和车损区域,所述最终结果表征了所述车损图像中的所有损伤位置的车损类型和车损区域。
可理解地,只需将所述车损图像输入训练完成的车损检测模型,通过所述车损检测模型进行所述车损纹理特征的提取,就可以得到所述最终结果,加快了识别速度,从而提高了识别效率。
本申请通过获取车损图像,将所述车损图像输入上述训练完成的车损检测模型,通过所述车损检测模型提取车损纹理特征,获取所述车损检测模型根据所述车损纹理特征输出的包含有车损类型和车损区域的最终结果;所述最终结果表征了所述车损图像中的所有车损位置的车损类型和车损区域,如此,提高了识别速度,从而提高了识别效率,减少了成本,提高了客户满意度。
在一实施例中,提供一种车损检测模型训练装置,该车损检测模型训练装置与上述实施例中车损检测模型训练方法一一对应。如图8所示,该车损检测模型训练装置包括获取模块11、输入模块12、识别模块13、确定模块14、损失模块15和迭代模块16。各功能模块详细说明如下:
获取模块11,用于获取车损样本集;所述车损样本集包括车损样本图像,所述车损样本图像包括车损原始图像和车损转换图像,一个所述车损样本图像与一个车损标签组关联;所述车损标签组包括车损标签类型和矩形区域;所述车损转换图像为所述车损原始图像通过图像预处理模型进行随机数值累加后并转换获得;
输入模块12,用于将所述车损样本图像输入含有初始参数的车损检测模型,通过所述车损检测模型提取所述车损样本图像中的车损纹理特征,获取所述车损检测模型根据提取的所述车损纹理特征输出的至少一个的预测结果;所述车损检测模型为基于InceptionV4模型架构的深度卷积神经网络模型;
识别模块13,用于通过GIOU方法和soft-NMS算法,获取所述车损检测模型对所有所述预测结果进行筛选获得的识别结果;所述识别结果包括样本车损类型和样本识别区域;
确定模块14,用于通过GIOU损失算法,根据所述矩形区域和所述样本识别区域确定出第一损失值,同时通过多分类交叉熵方法,根据所述车损标签类型和所述样本车损类型确定出第二损失值;
损失模块15,用于根据所述第一损失值和所述第二损失值,确定总损失值;
迭代模块16,用于在所述总损失值未达到预设的收敛条件时,迭代更新所述车损检测模型的初始参数,直至所述总损失值达到所述预设的收敛条件时,将收敛之后的所述车损检测模型记录为训练完成的车损检测模型。
在一实施例中,所述损失模块15包括:
收敛模块,用于在所述总损失值达到预设的收敛条件时,将收敛之后的所述车损检测模型记录为训练完成的车损检测模型。
在一实施例中,所述获取模块11包括:
第一获取单元,用于获取所述车损原始图像和与所述车损原始图像关联的所述车损标签组;
分离单元,用于通过图像预处理模型将所述车损原始图像分离,分离出红色通道的红色通道图像、绿色通道的绿色通道图像和蓝色通道的蓝色通道图像;
处理单元,用于通过图像预处理模型,对所述红色通道图像进行随机数值累加处理,得到红色加工通道图像,同时对所述绿色通道图像进行随机数值累加处理,得到绿色加工通道图像,以及对所述蓝色通道图像进行随机数值累加处理,得到蓝色加工通道图像;
输入单元,用于将所述红色加工通道图像、所述绿色加工通道图像和所述蓝色加工通道图像输入所述图像预处理模型中的六角锥体颜色空间模型;
转换单元,用于通过所述六角锥体颜色空间模型对所述红色加工通道图像、所述绿色加工通道图像和所述蓝色加工通道图像进行转换,得到所述车损转换图像;其中,所述车损转换图像包括色调通道的色调通道图像、饱和度通道的饱和度通道图像和明度通道的明度通道图像;
第一确定单元,用于将所述车损原始图像关联的车损标签组确定为所述车损转换图像关联的车损标签组。
在一实施例中,所述识别模块13包括:
第二获取单元,用于获取每个所述预测结果中的所述预测区域、与所述预测区域对应的所述预测类型和与所述预测区域对应的置信度;所述预测结果包括预测类型、预测区域和置信度;
计算单元,用于通过GIOU方法,根据所有所述预测区域、所有所述预测类型和所有所述置信度,确定每个所述预测区域对应的GIOU预测值;
第二确定单元,用于通过soft-NMS算法,根据所有所述GIOU预测值确定置信阈值;
筛选单元,用于获取所有所述置信度大于所述置信阈值对应的所述预测结果,并将所有所述置信度大于所述置信阈值对应的所述预测结果确定为所述识别结果。
在一实施例中,所述计算单元包括:
获取子单元,用于获取所述矩形区域和所述样本识别区域;
计算子单元,用于通过IOU算法,计算出所述样本识别区域距离所述矩形区域的IOU值;
确定子单元,用于根据所述矩形区域和所述样本识别区域,确定最小覆盖区域;
识别子单元,用于根据所述最小覆盖区域、所述矩形区域和所述样本识别区域,确定未占用区域;
非占比子单元,用于获取所述未占用区域与所述最小覆盖区域的比值,并将所述未占用区域与所述最小覆盖区域的比值确定为非占用比;
输出子单元,用于通过所述GIOU损失算法,根据所述非占用比和所述样本识别区域距离所述矩形区域的IOU值,计算出所述样本识别区域对应的所述第一损失值。
关于车损检测模型训练装置的具体限定可以参见上文中对于车损检测模型训练方法的限定,在此不再赘述。上述车损检测模型训练装置中的各个模块可全部或部分通过软件、硬件及其组合来实现。上述各模块可以硬件形式内嵌于或独立于计算机设备中的处理器中, 也可以以软件形式存储于计算机设备中的存储器中,以便于处理器调用执行以上各个模块对应的操作。
在一实施例中,提供一种车损检测装置,该车损检测装置与上述实施例中车损检测方法一一对应。如图9所示,该车损检测装置包括接收模块101和检测模块102。各功能模块详细说明如下:
接收模块101,用于接收到车损检测指令,获取车损图像;
检测模块102,用于将所述车损图像输入如上述车损检测模型训练方法训练完成的车损检测模型,通过所述车损检测模型提取车损纹理特征,获取所述车损检测模型根据所述车损纹理特征输出的最终结果;所述最终结果包括车损类型和车损区域,所述最终结果表征了所述车损图像中的所有损伤位置的车损类型和车损区域。
关于车损检测装置的具体限定可以参见上文中对于车损检测方法的限定,在此不再赘述。上述车损检测装置中的各个模块可全部或部分通过软件、硬件及其组合来实现。上述各模块可以硬件形式内嵌于或独立于计算机设备中的处理器中,也可以以软件形式存储于计算机设备中的存储器中,以便于处理器调用执行以上各个模块对应的操作。
在一个实施例中,提供了一种计算机设备,该计算机设备可以是服务器,其内部结构图可以如图10所示。该计算机设备包括通过系统总线连接的处理器、存储器、网络接口和数据库。其中,该计算机设备的处理器用于提供计算和控制能力。该计算机设备的存储器包括可读存储介质、内存储器。该可读存储介质存储有操作系统、计算机可读指令和数据库。该内存储器为可读存储介质中的操作系统和计算机可读指令的运行提供环境。该计算机设备的网络接口用于与外部的终端通过网络连接通信。该计算机可读指令被处理器执行时以实现一种车损检测模型训练方法,或者车损检测方法。本实施例所提供的可读存储介质包括非易失性可读存储介质和易失性可读存储介质。
在一个实施例中,提供了一种计算机设备,包括存储器、处理器及存储在存储器上并可在处理器上运行的计算机可读指令,处理器执行计算机可读指令时实现上述实施例中车损检测模型训练方法,或者处理器执行计算机可读指令时实现上述实施例中车损检测方法。
在一个实施例中,提供了一个或多个存储有计算机可读指令的可读存储介质,本实施例所提供的可读存储介质包括非易失性可读存储介质和易失性可读存储介质;该可读存储介质上存储有计算机可读指令,该计算机可读指令被一个或多个处理器执行时,使得一个或多个处理器实现上述实施例中车损检测模型训练方法,或者计算机程序被处理器执行时实现上述实施例中车损检测方法。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机可读指令来指令相关的硬件来完成,所述的计算机可读指令可存储于一非易失性计算机可读取存储介质或易失性可读存储介质中,该计算机可读指令在执行时,可包括如上述各方法的实施例的流程。其中,本申请所提供的各实施例中所使用的对存储器、存储、数据库或其它介质的任何引用,均可包括非易失性和/或易失性存储器。非易失性存储器可包括只读存储器(ROM)、可编程ROM(PROM)、电可编程ROM(EPROM)、电可擦除可编程ROM(EEPROM)或闪存。易失性存储器可包括随机存取存储器(RAM)或者外部高速缓冲存储器。作为说明而非局限,RAM以多种形式可得,诸如静态RAM(SRAM)、动态RAM(DRAM)、同步DRAM(SDRAM)、双数据率SDRAM(DDRSDRAM)、增强型SDRAM(ESDRAM)、同步链路(Synchlink)DRAM(SLDRAM)、存储器总线(Rambus)直接RAM(RDRAM)、直接存储器总线动态RAM(DRDRAM)、以及存储器总线动态RAM(RDRAM)等。
所属领域的技术人员可以清楚地了解到,为了描述的方便和简洁,仅以上述各功能单元、模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能单元、模块完成,即将所述装置的内部结构划分成不同的功能单元或模块,以完成以上 描述的全部或者部分功能。
以上所述实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的精神和范围,均应包含在本申请的保护范围之内。

Claims (20)

  1. 一种车损检测模型训练方法,其中,包括:
    获取车损样本集;所述车损样本集包括车损样本图像,所述车损样本图像包括车损原始图像和车损转换图像,一个所述车损样本图像与一个车损标签组关联;所述车损标签组包括车损标签类型和矩形区域;所述车损转换图像为所述车损原始图像通过图像预处理模型进行随机数值累加后并转换获得;
    将所述车损样本图像输入含有初始参数的车损检测模型,通过所述车损检测模型提取所述车损样本图像中的车损纹理特征,获取所述车损检测模型根据提取的所述车损纹理特征输出的至少一个的预测结果;所述车损检测模型为基于InceptionV4模型架构的深度卷积神经网络模型;
    通过GIOU方法和soft-NMS算法,获取所述车损检测模型对所有所述预测结果进行筛选获得的识别结果;所述识别结果包括样本车损类型和样本识别区域;
    通过GIOU损失算法,根据所述矩形区域和所述样本识别区域确定出第一损失值,同时通过多分类交叉熵方法,根据所述车损标签类型和所述样本车损类型确定出第二损失值;
    根据所述第一损失值和所述第二损失值,确定总损失值;
    在所述总损失值未达到预设的收敛条件时,迭代更新所述车损检测模型的初始参数,直至所述总损失值达到所述预设的收敛条件时,将收敛之后的所述车损检测模型记录为训练完成的车损检测模型。
  2. 如权利要求1所述的车损检测模型训练方法,其中,所述将所述车损样本图像输入含有初始参数的车损检测模型之前,包括:
    通过迁移学习,获取训练完成的InceptionV4模型的所有参数,将所有所述参数确定为所述车损检测模型中的所述初始参数。
  3. 如权利要求1所述的车损检测模型训练方法,其中,所述车损转换图像为所述车损原始图像通过图像预处理模型进行随机数值累加后并转换获得,包括:
    获取所述车损原始图像和与所述车损原始图像关联的所述车损标签组;
    通过图像预处理模型将所述车损原始图像分离,分离出红色通道的红色通道图像、绿色通道的绿色通道图像和蓝色通道的蓝色通道图像;
    通过图像预处理模型,对所述红色通道图像进行随机数值累加处理,得到红色加工通道图像,同时对所述绿色通道图像进行随机数值累加处理,得到绿色加工通道图像,以及对所述蓝色通道图像进行随机数值累加处理,得到蓝色加工通道图像;
    将所述红色加工通道图像、所述绿色加工通道图像和所述蓝色加工通道图像输入所述图像预处理模型中的六角锥体颜色空间模型;
    通过所述六角锥体颜色空间模型对所述红色加工通道图像、所述绿色加工通道图像和所述蓝色加工通道图像进行转换,得到所述车损转换图像;其中,所述车损转换图像包括色调通道的色调通道图像、饱和度通道的饱和度通道图像和明度通道的明度通道图像;
    将所述车损原始图像关联的车损标签组确定为所述车损转换图像关联的车损标签组。
  4. 如权利要求1所述的车损检测模型训练方法,其中,所述通过GIOU方法和soft-NMS算法,获取所述车损检测模型对所述预测结果进行筛选获得的识别结果,包括:
    获取每个所述预测结果中的所述预测区域、与所述预测区域对应的所述预测类型和与所述预测区域对应的置信度;所述预测结果包括预测类型、预测区域和置信度;
    通过GIOU方法,根据所有所述预测区域、所有所述预测类型和所有所述置信度,确定每个所述预测区域对应的GIOU预测值;
    通过soft-NMS算法,根据所有所述GIOU预测值确定置信阈值;
    获取所有所述置信度大于所述置信阈值对应的所述预测结果,并将所有所述置信度大 于所述置信阈值对应的所述预测结果确定为所述识别结果。
  5. 如权利要求4所述的车损检测模型训练方法,其中,所述通过GIOU损失算法,根据所述矩形区域和所述样本识别区域确定出第一损失值,包括:
    获取所述矩形区域和所述样本识别区域;
    通过IOU算法,计算出所述样本识别区域距离所述矩形区域的IOU值;
    根据所述矩形区域和所述样本识别区域,确定最小覆盖区域;
    根据所述最小覆盖区域、所述矩形区域和所述样本识别区域,确定未占用区域;
    获取所述未占用区域与所述最小覆盖区域的比值,并将所述未占用区域与所述最小覆盖区域的比值确定为非占用比;
    通过所述GIOU损失算法,根据所述非占用比和所述样本识别区域距离所述矩形区域的IOU值,计算出所述样本识别区域对应的所述第一损失值。
  6. 一种车损检测方法,其中,包括:
    接收到车损检测指令,获取车损图像;
    将所述车损图像输入如权利要求1至5任一项所述车损检测模型训练方法训练完成的车损检测模型,通过所述车损检测模型提取车损纹理特征,获取所述车损检测模型根据所述车损纹理特征输出的最终结果;所述最终结果包括车损类型和车损区域,所述最终结果表征了所述车损图像中的所有损伤位置的车损类型和车损区域。
  7. 一种车损检测模型训练装置,其中,包括:
    获取模块,用于获取车损样本集;所述车损样本集包括车损样本图像,所述车损样本图像包括车损原始图像和车损转换图像,一个所述车损样本图像与一个车损标签组关联;所述车损标签组包括车损标签类型和矩形区域;所述车损转换图像为所述车损原始图像通过图像预处理模型进行随机数值累加后并转换获得;
    输入模块,用于将所述车损样本图像输入含有初始参数的车损检测模型,通过所述车损检测模型提取所述车损样本图像中的车损纹理特征,获取所述车损检测模型根据提取的所述车损纹理特征输出的至少一个的预测结果;所述车损检测模型为基于InceptionV4模型架构的深度卷积神经网络模型;
    识别模块,用于通过GIOU方法和soft-NMS算法,获取所述车损检测模型对所有所述预测结果进行筛选获得的识别结果;所述识别结果包括样本车损类型和样本识别区域;
    确定模块,用于通过GIOU损失算法,根据所述矩形区域和所述样本识别区域确定出第一损失值,同时通过多分类交叉熵方法,根据所述车损标签类型和所述样本车损类型确定出第二损失值;
    损失模块,用于根据所述第一损失值和所述第二损失值,确定总损失值;
    迭代模块,用于在所述总损失值未达到预设的收敛条件时,迭代更新所述车损检测模型的初始参数,直至所述总损失值达到所述预设的收敛条件时,将收敛之后的所述车损检测模型记录为训练完成的车损检测模型。
  8. 一种车损检测装置,其中,包括:
    接收模块,用于接收到车损检测指令,获取车损图像;
    检测模块,用于将所述车损图像输入如权利要求1至5任一项所述车损检测模型训练方法训练完成的车损检测模型,通过所述车损检测模型提取车损纹理特征,获取所述车损检测模型根据所述车损纹理特征输出的最终结果;所述最终结果包括车损类型和车损区域,所述最终结果表征了所述车损图像中的所有损伤位置的车损类型和车损区域。
  9. 一种计算机设备,包括存储器、处理器以及存储在所述存储器中并可在所述处理器上运行的计算机可读指令,其中,所述处理器执行所述计算机可读指令时实现如下步骤:
    获取车损样本集;所述车损样本集包括车损样本图像,所述车损样本图像包括车损原始图像和车损转换图像,一个所述车损样本图像与一个车损标签组关联;所述车损标签组 包括车损标签类型和矩形区域;所述车损转换图像为所述车损原始图像通过图像预处理模型进行随机数值累加后并转换获得;
    将所述车损样本图像输入含有初始参数的车损检测模型,通过所述车损检测模型提取所述车损样本图像中的车损纹理特征,获取所述车损检测模型根据提取的所述车损纹理特征输出的至少一个的预测结果;所述车损检测模型为基于InceptionV4模型架构的深度卷积神经网络模型;
    通过GIOU方法和soft-NMS算法,获取所述车损检测模型对所有所述预测结果进行筛选获得的识别结果;所述识别结果包括样本车损类型和样本识别区域;
    通过GIOU损失算法,根据所述矩形区域和所述样本识别区域确定出第一损失值,同时通过多分类交叉熵方法,根据所述车损标签类型和所述样本车损类型确定出第二损失值;
    根据所述第一损失值和所述第二损失值,确定总损失值;
    在所述总损失值未达到预设的收敛条件时,迭代更新所述车损检测模型的初始参数,直至所述总损失值达到所述预设的收敛条件时,将收敛之后的所述车损检测模型记录为训练完成的车损检测模型。
  10. 如权利要求9所述的计算机设备,其中,所述将所述车损样本图像输入含有初始参数的车损检测模型之前,所述处理器执行所述计算机可读指令时还实现如下步骤:
    通过迁移学习,获取训练完成的InceptionV4模型的所有参数,将所有所述参数确定为所述车损检测模型中的所述初始参数。
  11. 如权利要求9所述的计算机设备,其中,所述车损转换图像为所述车损原始图像通过图像预处理模型进行随机数值累加后并转换获得,所述处理器执行所述计算机可读指令时还实现如下步骤:
    获取所述车损原始图像和与所述车损原始图像关联的所述车损标签组;
    通过图像预处理模型将所述车损原始图像分离,分离出红色通道的红色通道图像、绿色通道的绿色通道图像和蓝色通道的蓝色通道图像;
    通过图像预处理模型,对所述红色通道图像进行随机数值累加处理,得到红色加工通道图像,同时对所述绿色通道图像进行随机数值累加处理,得到绿色加工通道图像,以及对所述蓝色通道图像进行随机数值累加处理,得到蓝色加工通道图像;
    将所述红色加工通道图像、所述绿色加工通道图像和所述蓝色加工通道图像输入所述图像预处理模型中的六角锥体颜色空间模型;
    通过所述六角锥体颜色空间模型对所述红色加工通道图像、所述绿色加工通道图像和所述蓝色加工通道图像进行转换,得到所述车损转换图像;其中,所述车损转换图像包括色调通道的色调通道图像、饱和度通道的饱和度通道图像和明度通道的明度通道图像;
    将所述车损原始图像关联的车损标签组确定为所述车损转换图像关联的车损标签组。
  12. 如权利要求9所述的计算机设备,其中,所述通过GIOU方法和soft-NMS算法,获取所述车损检测模型对所述预测结果进行筛选获得的识别结果,包括:
    获取每个所述预测结果中的所述预测区域、与所述预测区域对应的所述预测类型和与所述预测区域对应的置信度;所述预测结果包括预测类型、预测区域和置信度;
    通过GIOU方法,根据所有所述预测区域、所有所述预测类型和所有所述置信度,确定每个所述预测区域对应的GIOU预测值;
    通过soft-NMS算法,根据所有所述GIOU预测值确定置信阈值;
    获取所有所述置信度大于所述置信阈值对应的所述预测结果,并将所有所述置信度大于所述置信阈值对应的所述预测结果确定为所述识别结果。
  13. 如权利要求12所述的计算机设备,其中,所述通过GIOU损失算法,根据所述矩形区域和所述样本识别区域确定出第一损失值,包括:
    获取所述矩形区域和所述样本识别区域;
    通过IOU算法,计算出所述样本识别区域距离所述矩形区域的IOU值;
    根据所述矩形区域和所述样本识别区域,确定最小覆盖区域;
    根据所述最小覆盖区域、所述矩形区域和所述样本识别区域,确定未占用区域;
    获取所述未占用区域与所述最小覆盖区域的比值,并将所述未占用区域与所述最小覆盖区域的比值确定为非占用比;
    通过所述GIOU损失算法,根据所述非占用比和所述样本识别区域距离所述矩形区域的IOU值,计算出所述样本识别区域对应的所述第一损失值。
  14. 一种计算机设备,包括存储器、处理器以及存储在所述存储器中并可在所述处理器上运行的计算机可读指令,其中,所述处理器执行所述计算机可读指令时还实现如下步骤:
    接收到车损检测指令,获取车损图像;
    将所述车损图像输入通过车损检测模型训练方法训练完成的车损检测模型,通过所述车损检测模型提取车损纹理特征,获取所述车损检测模型根据所述车损纹理特征输出的最终结果;所述最终结果包括车损类型和车损区域,所述最终结果表征了所述车损图像中的所有损伤位置的车损类型和车损区域。
  15. 一个或多个存储有计算机可读指令的可读存储介质,其中,所述计算机可读指令被一个或多个处理器执行时,使得所述一个或多个处理器执行如下步骤:
    获取车损样本集;所述车损样本集包括车损样本图像,所述车损样本图像包括车损原始图像和车损转换图像,一个所述车损样本图像与一个车损标签组关联;所述车损标签组包括车损标签类型和矩形区域;所述车损转换图像为所述车损原始图像通过图像预处理模型进行随机数值累加后并转换获得;
    将所述车损样本图像输入含有初始参数的车损检测模型,通过所述车损检测模型提取所述车损样本图像中的车损纹理特征,获取所述车损检测模型根据提取的所述车损纹理特征输出的至少一个的预测结果;所述车损检测模型为基于InceptionV4模型架构的深度卷积神经网络模型;
    通过GIOU方法和soft-NMS算法,获取所述车损检测模型对所有所述预测结果进行筛选获得的识别结果;所述识别结果包括样本车损类型和样本识别区域;
    通过GIOU损失算法,根据所述矩形区域和所述样本识别区域确定出第一损失值,同时通过多分类交叉熵方法,根据所述车损标签类型和所述样本车损类型确定出第二损失值;
    根据所述第一损失值和所述第二损失值,确定总损失值;
    在所述总损失值未达到预设的收敛条件时,迭代更新所述车损检测模型的初始参数,直至所述总损失值达到所述预设的收敛条件时,将收敛之后的所述车损检测模型记录为训练完成的车损检测模型。
  16. 如权利要求15所述的可读存储介质,其中,所述将所述车损样本图像输入含有初始参数的车损检测模型之前,所述计算机可读指令被一个或多个处理器执行时,使得所述一个或多个处理器执行如下步骤:
    通过迁移学习,获取训练完成的InceptionV4模型的所有参数,将所有所述参数确定为所述车损检测模型中的所述初始参数。
  17. 如权利要求15所述的可读存储介质,其中,所述车损转换图像为所述车损原始图像通过图像预处理模型进行随机数值累加后并转换获得,所述计算机可读指令被一个或多个处理器执行时,使得所述一个或多个处理器执行如下步骤:
    获取所述车损原始图像和与所述车损原始图像关联的所述车损标签组;
    通过图像预处理模型将所述车损原始图像分离,分离出红色通道的红色通道图像、绿色通道的绿色通道图像和蓝色通道的蓝色通道图像;
    通过图像预处理模型,对所述红色通道图像进行随机数值累加处理,得到红色加工通道图像,同时对所述绿色通道图像进行随机数值累加处理,得到绿色加工通道图像,以及 对所述蓝色通道图像进行随机数值累加处理,得到蓝色加工通道图像;
    将所述红色加工通道图像、所述绿色加工通道图像和所述蓝色加工通道图像输入所述图像预处理模型中的六角锥体颜色空间模型;
    通过所述六角锥体颜色空间模型对所述红色加工通道图像、所述绿色加工通道图像和所述蓝色加工通道图像进行转换,得到所述车损转换图像;其中,所述车损转换图像包括色调通道的色调通道图像、饱和度通道的饱和度通道图像和明度通道的明度通道图像;
    将所述车损原始图像关联的车损标签组确定为所述车损转换图像关联的车损标签组。
  18. 如权利要求15所述的可读存储介质,其中,所述通过GIOU方法和soft-NMS算法,获取所述车损检测模型对所述预测结果进行筛选获得的识别结果,包括:
    获取每个所述预测结果中的所述预测区域、与所述预测区域对应的所述预测类型和与所述预测区域对应的置信度;所述预测结果包括预测类型、预测区域和置信度;
    通过GIOU方法,根据所有所述预测区域、所有所述预测类型和所有所述置信度,确定每个所述预测区域对应的GIOU预测值;
    通过soft-NMS算法,根据所有所述GIOU预测值确定置信阈值;
    获取所有所述置信度大于所述置信阈值对应的所述预测结果,并将所有所述置信度大于所述置信阈值对应的所述预测结果确定为所述识别结果。
  19. 如权利要求18所述的可读存储介质,其中,所述通过GIOU损失算法,根据所述矩形区域和所述样本识别区域确定出第一损失值,包括:
    获取所述矩形区域和所述样本识别区域;
    通过IOU算法,计算出所述样本识别区域距离所述矩形区域的IOU值;
    根据所述矩形区域和所述样本识别区域,确定最小覆盖区域;
    根据所述最小覆盖区域、所述矩形区域和所述样本识别区域,确定未占用区域;
    获取所述未占用区域与所述最小覆盖区域的比值,并将所述未占用区域与所述最小覆盖区域的比值确定为非占用比;
    通过所述GIOU损失算法,根据所述非占用比和所述样本识别区域距离所述矩形区域的IOU值,计算出所述样本识别区域对应的所述第一损失值。
  20. 一个或多个存储有计算机可读指令的可读存储介质,其中,所述计算机可读指令被一个或多个处理器执行时,使得所述一个或多个处理器还执行如下步骤:
    接收到车损检测指令,获取车损图像;
    将所述车损图像输入通过车损检测模型训练方法训练完成的车损检测模型,通过所述车损检测模型提取车损纹理特征,获取所述车损检测模型根据所述车损纹理特征输出的最终结果;所述最终结果包括车损类型和车损区域,所述最终结果表征了所述车损图像中的所有损伤位置的车损类型和车损区域。
PCT/CN2020/120758 2020-06-08 2020-10-14 车损检测模型训练、车损检测方法、装置、设备及介质 WO2021135500A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010513050.5A CN111680746B (zh) 2020-06-08 2020-06-08 车损检测模型训练、车损检测方法、装置、设备及介质
CN202010513050.5 2020-06-08

Publications (1)

Publication Number Publication Date
WO2021135500A1 true WO2021135500A1 (zh) 2021-07-08

Family

ID=72435500

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/120758 WO2021135500A1 (zh) 2020-06-08 2020-10-14 车损检测模型训练、车损检测方法、装置、设备及介质

Country Status (2)

Country Link
CN (1) CN111680746B (zh)
WO (1) WO2021135500A1 (zh)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113628206A (zh) * 2021-08-25 2021-11-09 深圳市捷顺科技实业股份有限公司 一种车牌检测方法、装置、介质
CN113723400A (zh) * 2021-08-23 2021-11-30 中南大学 一种基于红外图像的电解槽极板故障识别方法、系统、终端及可读存储介质
CN114898155A (zh) * 2022-05-18 2022-08-12 平安科技(深圳)有限公司 车辆定损方法、装置、设备及存储介质
CN115512341A (zh) * 2022-09-15 2022-12-23 粤丰科盈智能投资(广东)有限公司 一种基于高斯分布拟合的目标检测方法、装置及计算机介质
CN115527189A (zh) * 2022-11-01 2022-12-27 杭州枕石智能科技有限公司 车位状态的检测方法、终端设备及计算机可读存储介质

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111680746B (zh) * 2020-06-08 2023-08-04 平安科技(深圳)有限公司 车损检测模型训练、车损检测方法、装置、设备及介质
CN112101550B (zh) * 2020-09-25 2024-05-03 平安科技(深圳)有限公司 分诊融合模型训练方法、分诊方法、装置、设备及介质
CN112541587A (zh) * 2020-11-19 2021-03-23 西人马帝言(北京)科技有限公司 一种识别模型训练方法、装置、设备及计算机存储介质
CN112668462B (zh) * 2020-12-25 2024-05-07 平安科技(深圳)有限公司 车损检测模型训练、车损检测方法、装置、设备及介质
CN112926437B (zh) * 2021-02-22 2024-06-11 深圳中科飞测科技股份有限公司 检测方法及装置、检测设备和存储介质
CN112907576B (zh) * 2021-03-25 2024-02-02 平安科技(深圳)有限公司 车辆损伤等级检测方法、装置、计算机设备及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109215027A (zh) * 2018-10-11 2019-01-15 平安科技(深圳)有限公司 一种基于神经网络的车辆定损方法、服务器及介质
US20190213689A1 (en) * 2017-04-11 2019-07-11 Alibaba Group Holding Limited Image-based vehicle damage determining method and apparatus, and electronic device
CN110363238A (zh) * 2019-07-03 2019-10-22 中科软科技股份有限公司 智能车辆定损方法、系统、电子设备及存储介质
CN110889428A (zh) * 2019-10-21 2020-03-17 浙江大搜车软件技术有限公司 图像识别方法、装置、计算机设备与存储介质
CN111680746A (zh) * 2020-06-08 2020-09-18 平安科技(深圳)有限公司 车损检测模型训练、车损检测方法、装置、设备及介质

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107194398B (zh) * 2017-05-10 2018-09-25 平安科技(深圳)有限公司 车损部位的识别方法及系统
CN108734702A (zh) * 2018-04-26 2018-11-02 平安科技(深圳)有限公司 车损判定方法、服务器及存储介质


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113723400A (zh) * 2021-08-23 2021-11-30 中南大学 一种基于红外图像的电解槽极板故障识别方法、系统、终端及可读存储介质
CN113628206A (zh) * 2021-08-25 2021-11-09 深圳市捷顺科技实业股份有限公司 一种车牌检测方法、装置、介质
CN114898155A (zh) * 2022-05-18 2022-08-12 平安科技(深圳)有限公司 车辆定损方法、装置、设备及存储介质
CN114898155B (zh) * 2022-05-18 2024-05-28 平安科技(深圳)有限公司 车辆定损方法、装置、设备及存储介质
CN115512341A (zh) * 2022-09-15 2022-12-23 粤丰科盈智能投资(广东)有限公司 一种基于高斯分布拟合的目标检测方法、装置及计算机介质
CN115512341B (zh) * 2022-09-15 2023-10-27 粤丰科盈智能投资(广东)有限公司 基于高斯分布拟合的目标检测方法、装置及计算机介质
CN115527189A (zh) * 2022-11-01 2022-12-27 杭州枕石智能科技有限公司 车位状态的检测方法、终端设备及计算机可读存储介质

Also Published As

Publication number Publication date
CN111680746A (zh) 2020-09-18
CN111680746B (zh) 2023-08-04

Similar Documents

Publication Publication Date Title
WO2021135500A1 (zh) 车损检测模型训练、车损检测方法、装置、设备及介质
CN109543627B (zh) 一种判断驾驶行为类别的方法、装置、及计算机设备
CN111860147B (zh) 行人重识别模型优化处理方法、装置和计算机设备
CN112836687B (zh) 视频行为分割方法、装置、计算机设备及介质
CN110569721A (zh) 识别模型训练方法、图像识别方法、装置、设备及介质
CN106683073B (zh) 一种车牌的检测方法及摄像机和服务器
CN110765860A (zh) 摔倒判定方法、装置、计算机设备及存储介质
CN112949507A (zh) 人脸检测方法、装置、计算机设备及存储介质
WO2022252642A1 (zh) 基于视频图像的行为姿态检测方法、装置、设备及介质
CN110046577B (zh) 行人属性预测方法、装置、计算机设备和存储介质
CN111401196A (zh) 受限空间内自适应人脸聚类的方法、计算机装置及计算机可读存储介质
CN111935479A (zh) 一种目标图像确定方法、装置、计算机设备及存储介质
WO2022194079A1 (zh) 天空区域分割方法、装置、计算机设备和存储介质
CN112883983B (zh) 特征提取方法、装置和电子系统
CN113469092B (zh) 字符识别模型生成方法、装置、计算机设备和存储介质
CN111126208A (zh) 行人归档方法、装置、计算机设备及存储介质
WO2021189770A1 (zh) 基于人工智能的图像增强处理方法、装置、设备及介质
CN110942067A (zh) 文本识别方法、装置、计算机设备和存储介质
CN111428740A (zh) 网络翻拍照片的检测方法、装置、计算机设备及存储介质
CN116681687B (zh) 基于计算机视觉的导线检测方法、装置和计算机设备
CN112818960A (zh) 基于人脸识别的等待时长处理方法、装置、设备及介质
CN116403200A (zh) 基于硬件加速的车牌实时识别系统
CN110751623A (zh) 基于联合特征的缺陷检测方法、装置、设备及存储介质
CN111931688A (zh) 船只识别方法、装置、计算机设备及存储介质
Broetto et al. Heterogeneous feature models and feature selection applied to detection of street lighting lamps types and wattages

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20910609; Country of ref document: EP; Kind code of ref document: A1)
122 Ep: pct application non-entry in european phase (Ref document number: 20910609; Country of ref document: EP; Kind code of ref document: A1)