WO2019205391A1 - Apparatus and method for generating vehicle damage classification model, and computer readable storage medium - Google Patents

Apparatus and method for generating vehicle damage classification model, and computer readable storage medium

Info

Publication number
WO2019205391A1
Authority
WO
WIPO (PCT)
Prior art keywords
preset
sample
vehicle damage
classification model
training
Prior art date
Application number
PCT/CN2018/102411
Other languages
French (fr)
Chinese (zh)
Inventor
王健宗
王晨羽
马进
肖京
Original Assignee
平安科技(深圳)有限公司
Application filed by 平安科技(深圳)有限公司
Publication of WO2019205391A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/20Scenes; Scene-specific elements in augmented reality scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/082Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/08Insurance

Definitions

  • the present application relates to the technical field of vehicle damage assessment, and in particular, to a device, a method, and a computer readable storage medium for generating a vehicle damage classification model.
  • the present application provides a device, a method, and a computer readable storage medium for generating a vehicle damage classification model, the main purpose of which is to solve the technical problem in the prior art that vehicle damage assessment cannot be performed on a mobile device through a convolutional neural network model.
  • the present application provides a device for generating a vehicle damage classification model, the device comprising a memory and a processor, wherein the memory stores a model generation program executable on the processor, and the model generation program implements the following steps when executed by the processor:
  • a vehicle damage classification model is constructed based on the pruned vgg network, and the vehicle damage classification model is trained using the sample pictures in the sample library to determine the model parameters of the vehicle damage classification model.
  • the present application further provides a method for generating a vehicle damage classification model, the method comprising:
  • a vehicle damage classification model is constructed based on the pruned vgg network, and the vehicle damage classification model is trained using the sample pictures in the sample library to determine the model parameters of the vehicle damage classification model.
  • the present application further provides a computer readable storage medium having a model generation program stored thereon, the model generation program being executable by one or more processors to implement the steps of the method for generating a vehicle damage classification model as described above.
  • FIG. 1 is a schematic diagram of a preferred embodiment of a device for generating a vehicle damage classification model of the present application
  • FIG. 2 is a schematic diagram of a program module of a model generation program in an embodiment of a device for generating a vehicle damage classification model according to the present application;
  • FIG. 3 is a flow chart of a preferred embodiment of a method for generating a vehicle damage classification model of the present application.
  • the application provides a device for generating a vehicle damage classification model.
  • referring to FIG. 1, which is a schematic diagram of a preferred embodiment of the device for generating a vehicle damage classification model of the present application.
  • the device 1 for generating a vehicle damage classification model may be a PC (Personal Computer), or may be a terminal device such as a smart phone, a tablet computer, or a portable computer.
  • the vehicle damage classification model generating apparatus 1 includes at least a memory 11, a processor 12, a communication bus 13, and a network interface 14.
  • the memory 11 includes at least one type of readable storage medium including a flash memory, a hard disk, a multimedia card, a card type memory (for example, an SD or DX memory, etc.), a magnetic memory, a magnetic disk, an optical disk, and the like.
  • the memory 11 may in some embodiments be an internal storage unit of the generating device 1 of the vehicle damage classification model, such as the hard disk of the generating device 1 of the vehicle damage classification model.
  • the memory 11 may also be an external storage device of the vehicle damage classification model generating device 1 in other embodiments, such as a plug-in hard disk equipped on the generating device 1, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card.
  • the memory 11 may also include an internal storage unit of the generating device 1 of the vehicle damage classification model and an external storage device.
  • the memory 11 can be used not only for storing application software installed on the generating device 1 of the vehicle damage classification model and various types of data, such as the code of the model generation program 01, but also for temporarily storing data that has been output or is to be output.
  • the processor 12 may be a Central Processing Unit (CPU), a controller, a microcontroller, a microprocessor, or another data processing chip, for running program code stored in the memory 11 or processing data, for example executing the model generation program 01.
  • Communication bus 13 is used to implement connection communication between these components.
  • the network interface 14 can optionally include a standard wired interface, a wireless interface (such as a WI-FI interface), and is typically used to generate a communication connection between the device 1 and other electronic devices.
  • Figure 1 shows only the generating device 1 of the vehicle damage classification model with the components 11-14 and the model generation program 01, but it should be understood that not all of the illustrated components are required to be implemented, and more or fewer components may be implemented instead.
  • the device 1 may further include a user interface, which may include a display and an input unit such as a keyboard; the optional user interface may further include a standard wired interface and a wireless interface.
  • the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch sensor, or the like.
  • the display may also be appropriately referred to as a display screen or a display unit, for displaying information processed in the vehicle damage classification model generating device 1 and for displaying a visualized user interface.
  • the device 1 may also comprise a touch sensor.
  • the area provided by the touch sensor for the user to perform a touch operation is referred to as a touch area.
  • the touch sensor described herein may be a resistive touch sensor, a capacitive touch sensor, or the like.
  • the touch sensor includes not only a contact type touch sensor but also a proximity type touch sensor or the like.
  • the touch sensor may be a single sensor or a plurality of sensors arranged in an array.
  • the area of the display of the device 1 may be the same as or different from the area of the touch sensor.
  • a display is stacked with the touch sensor to form a touch display. The device 1 detects a user-triggered touch operation based on the touch display screen.
  • the device 1 may further include a camera, an RF (Radio Frequency) circuit, a sensor, an audio circuit, a WiFi module, and the like.
  • sensors such as light sensors, motion sensors, and other sensors.
  • the light sensor may include an ambient light sensor and a proximity sensor, wherein if the device 1 is a mobile terminal, the ambient light sensor may adjust the brightness of the display screen according to the brightness of the ambient light, and the proximity sensor may turn off the display screen and/or the backlight when the mobile terminal is moved to the ear.
  • the gravity acceleration sensor can detect the magnitude of acceleration in each direction (usually three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications that recognize the posture of the mobile terminal (such as switching between landscape and portrait screens, related games, and magnetometer attitude calibration) and for vibration-recognition functions (such as a pedometer or tapping); of course, the mobile terminal can also be equipped with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which are not described here again.
  • a model generation program is stored in the memory 11; when the processor 12 executes the model generation program stored in the memory 11, the following steps are implemented:
  • a preset number of original car damage photos are prepared for each car damage part, and the original car damage photo is marked and preprocessed as a sample picture to construct a sample library.
  • a preset number of original vehicle damage photos need to be collected for each vehicle damage part, wherein the vehicle damage parts include but are not limited to: a left front door, a right front door, a left fender, a right fender, a front bumper, a rear bumper, etc.
  • the original car damage photos can be obtained from historical vehicle loss-assessment files, together with the loss-assessment information corresponding to each photo, and the damage degree of each original photo is labeled according to that information; for example, if the loss-assessment information corresponding to an original car damage photo is 'intact', the photo is labeled 0; if it is 'needs repainting', the photo is labeled 1; if it is 'needs repair', the photo is labeled 2; and if it is 'needs replacement', the photo is labeled 3.
  • the preset number can be set as needed; the more sample photos there are, the more accurate the classification results of the trained classification model will be.
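  • as a minimal illustration of the labelling scheme above (the dictionary keys are placeholder strings chosen for this sketch, not wording taken from actual loss-assessment files):

```python
# Hypothetical mapping from the loss-assessment outcome recorded in a historical
# claim file to the class labels described above. The string keys are assumed
# placeholders; real files would carry the insurer's own field values.
DAMAGE_LABELS = {
    "intact": 0,            # no visible damage
    "needs_repainting": 1,  # panel only needs a respray
    "needs_repair": 2,      # panel must be repaired
    "needs_replacement": 3, # panel must be replaced
}

def label_photo(loss_info: str) -> int:
    """Return the training label for one original car-damage photo."""
    return DAMAGE_LABELS[loss_info]
```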
  • after the photos are labeled, they are preprocessed and used as sample pictures; specifically, the labeled original car damage photos are cropped according to a preset size to generate first sample pictures, data enhancement is performed on the first sample pictures to generate second sample pictures, and a sample library is constructed based on the first sample pictures and the second sample pictures.
  • the size of all the collected original car damage photos is uniformly cropped to a preset size.
  • the preset size may be 224×224, and the cropped photos are used as the first sample pictures; data enhancement is then performed on the first sample pictures to increase the diversity of the samples and improve the accuracy of the model.
  • the data enhancement operations include random cropping, 90-degree flipping, the addition of Gaussian noise, and the like, wherein each first sample picture can be processed by the above enhancement operations to obtain a plurality of pictures; the obtained pictures are used as second sample pictures, and the sample library is constructed based on the first sample pictures and the second sample pictures.
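  • a minimal preprocessing and data-enhancement sketch in Python using Pillow and NumPy; the upscale-before-random-crop step and the noise strength are assumptions, since the text does not specify them:

```python
import numpy as np
from PIL import Image

TARGET = 224  # preset size mentioned in the text

def center_crop(img: Image.Image, size: int = TARGET) -> Image.Image:
    """Uniformly crop a labelled photo to the preset size (first sample picture)."""
    w, h = img.size
    left, top = (w - size) // 2, (h - size) // 2
    return img.crop((left, top, left + size, top + size))

def random_crop(img: Image.Image, size: int = TARGET) -> Image.Image:
    """Crop a random window of the preset size out of a larger image."""
    w, h = img.size
    left = np.random.randint(0, w - size + 1)
    top = np.random.randint(0, h - size + 1)
    return img.crop((left, top, left + size, top + size))

def rotate_90(img: Image.Image) -> Image.Image:
    """90-degree rotation of a square first sample picture."""
    return img.rotate(90)

def add_gaussian_noise(img: Image.Image, sigma: float = 10.0) -> Image.Image:
    """Add Gaussian noise with an assumed standard deviation."""
    arr = np.asarray(img).astype(np.float32)
    noisy = arr + np.random.normal(0.0, sigma, arr.shape)
    return Image.fromarray(np.clip(noisy, 0, 255).astype(np.uint8))

def augment(first_sample: Image.Image) -> list:
    """Produce several second sample pictures from one first sample picture."""
    bigger = first_sample.resize((TARGET + 32, TARGET + 32))  # so random_crop has room
    return [random_crop(bigger), rotate_90(first_sample), add_gaussian_noise(first_sample)]
```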
  • the sample picture in the sample library is used as input data of the vgg network, and the vgg network is pruned according to a preset pruning algorithm and a preset compression ratio.
  • let I_n and W_n denote the convolution operation of the n-th layer in the vgg network, where I_n represents the input of the n-th layer, which has C channels, and W_n represents a set of filters with convolution kernels of size k×k, namely the filters of the n-th layer.
  • the output data of the n-th layer is the input of the (n+1)-th layer.
  • the purpose of the pruning operation is to remove some unimportant filters from W_n, and when a filter in W_n is removed, the corresponding channels in I_{n+1} and W_{n+1} are also removed; therefore, the function of this step is to select, from all the channels, the set of channels to be deleted. Specifically, a greedy algorithm is used to compute the set T of channels for which the value of the following expression is smallest; the channels in this set are the channels to be deleted, and when a channel is deleted, the corresponding filter is deleted with it.
  • where |T| = C × (1 - r), C is the number of channels, x is the input training data, m is the number of training data, and r is the preset compression ratio of the model.
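  • the expression being minimized is not reproduced in this text; one channel-selection objective consistent with the symbols above (a ThiNet-style criterion, stated here as an assumption rather than as the patent's own formula) is:

```latex
% \hat{x}_{i,j}: contribution of input channel j to the layer output for
% training sample i (assumed notation, not taken from the original text)
T^{*} = \arg\min_{T \subset \{1,\dots,C\}} \; \sum_{i=1}^{m} \Big( \sum_{j \in T} \hat{x}_{i,j} \Big)^{2}
\qquad \text{subject to} \qquad |T| = C \times (1 - r)
```

  • under this reading, the deleted channels are those whose removal changes the next layer's input the least over the m training samples.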
  • the calculated T is a subset of the input channels. According to the above formula, the set of channels to be deleted is calculated for each convolutional layer, and the convolutional layers in the network are then pruned according to the calculation results; the number of channels remaining after pruning depends on the preset compression ratio.
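  • a sketch of the greedy selection in Python/NumPy, assuming the per-sample, per-channel contributions have already been collected into an m×C array (how those contributions are computed is not specified here, so the `contrib` input is an assumption):

```python
import numpy as np

def select_channels_to_prune(contrib: np.ndarray, r: float) -> list:
    """
    Greedily pick the channel set T to delete from one convolutional layer.

    contrib: array of shape (m, C); contrib[i, j] is the assumed contribution of
             input channel j to the layer's output for training sample i.
    r:       preset compression ratio, e.g. 0.5 removes half of the channels.
    """
    m, C = contrib.shape
    target = int(C * (1 - r))          # |T| = C x (1 - r)
    T, remaining = [], list(range(C))
    partial = np.zeros(m)              # running sum over channels already placed in T
    while len(T) < target:
        # add the channel whose inclusion keeps the accumulated error smallest
        best = min(remaining, key=lambda j: np.sum((partial + contrib[:, j]) ** 2))
        partial += contrib[:, best]
        T.append(best)
        remaining.remove(best)
    return T
```

  • the channels in the returned T, together with the matching filters in W_n and the corresponding channels of W_{n+1}, would then be removed from the layer.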
  • the data in the sample library is iterated 1-2 times on the pruned network to fine tune and optimize the network.
  • the data in the sample library is again iterated 1-2 times on the pruned network to further optimize the pruned network.
  • the vgg network is a convolutional neural network, and for a convolutional neural network about 90% of the computation takes place in the convolutional layers, so compressing the convolutional layers can improve computational efficiency; the compression ratio can be set according to the user's needs, for example, if it is set to 50%, half of the channels can be removed from the pruned network.
  • This pruning method has basically no effect on the accuracy of the vgg network, and the classification model constructed by the network can still maintain a high accuracy.
  • since the fully connected layers account for most of the parameters of the network, all the fully connected layers can be replaced by a global average pooling layer.
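  • a sketch of this head replacement in PyTorch; the 1×1 convolution that maps the feature maps to one map per class is an assumed detail, since the text only states that the fully connected layers are replaced by global average pooling, and the channel counts shown are for an unpruned VGG-16:

```python
import torch.nn as nn
from torchvision.models import vgg16

NUM_CLASSES = 4                      # damage labels 0-3 described above

backbone = vgg16().features          # convolutional part only; in the patent this
                                     # part would already have been pruned

model = nn.Sequential(
    backbone,                                    # (N, 512, 7, 7) for 224x224 inputs
    nn.Conv2d(512, NUM_CLASSES, kernel_size=1),  # one feature map per damage class
    nn.AdaptiveAvgPool2d(1),                     # global average pooling instead of FC layers
    nn.Flatten(),                                # (N, NUM_CLASSES) logits
)
```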
  • a vehicle damage classification model is constructed based on the pruned vgg network, and the vehicle damage classification model is trained using the sample pictures in the sample library to determine the model parameters of the vehicle damage classification model.
  • the vehicle damage classification model is constructed based on the pruned vgg network.
  • the model is a classification model based on a convolutional neural network.
  • the classification model is trained using the sample pictures in the previously constructed sample library; specifically, the sample pictures in the sample library are divided into a training set of a first preset ratio and a verification set of a second preset ratio, wherein the sum of the first preset ratio and the second preset ratio is 1, for example, the first preset ratio is 80% and the second preset ratio is 20%.
  • the training set is used to train the vehicle damage classification model, and the verification set is used to verify the accuracy of the trained vehicle damage classification model, wherein if the accuracy is greater than or equal to the preset accuracy, the training ends; if the accuracy is less than the preset accuracy, the step of dividing the sample pictures in the sample library into a training set of the first preset ratio and a verification set of the second preset ratio is re-executed. That is, when the training result of the classification model is verified using the verification set and the accuracy is less than the preset accuracy, the sample pictures in the sample library are re-divided into two sets according to the first preset ratio and the second preset ratio and the model is retrained, iterating in this way until the trained model reaches the preset accuracy on the verification set, at which point the training is complete and the model parameters are obtained.
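  • a schematic of the split, train, validate, and re-divide loop in Python; the training and evaluation routines are passed in as callables because the text does not fix them, and the accuracy threshold and round limit shown are assumptions, not values from the text:

```python
import random

def train_until_accurate(samples, labels, train_fn, eval_fn,
                         first_ratio=0.8, second_ratio=0.2,
                         target_acc=0.95, max_rounds=20):
    """Divide the sample library, train, validate, and re-divide/re-train until
    the validation accuracy reaches the preset accuracy.

    train_fn(train_set) -> model        trains the classification model
    eval_fn(model, val_set) -> float    accuracy on the verification set, in [0, 1]
    first_ratio + second_ratio must sum to 1 (e.g. 80% / 20% as in the text);
    target_acc and max_rounds are assumed values.
    """
    assert abs(first_ratio + second_ratio - 1.0) < 1e-9
    data = list(zip(samples, labels))
    model = None
    for _ in range(max_rounds):
        random.shuffle(data)                       # re-divide the sample library
        split = int(len(data) * first_ratio)
        train_set, val_set = data[:split], data[split:]
        model = train_fn(train_set)
        if eval_fn(model, val_set) >= target_acc:  # accuracy meets the preset value
            break                                  # training ends
    return model
```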
  • the final vehicle damage classification model can be applied to vehicle damage assessment.
  • in use, the photo to be assessed is input to the model, and the output is the category of the corresponding degree of damage.
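  • assuming a PyTorch model such as the sketch above and an input photo already cropped to 224×224 and converted to a normalized tensor, inference could look like this:

```python
import torch

@torch.no_grad()
def classify_damage(model, photo: torch.Tensor) -> int:
    """photo: preprocessed image tensor of shape (1, 3, 224, 224)."""
    model.eval()
    logits = model(photo)
    return int(logits.argmax(dim=1).item())  # 0 intact, 1 repaint, 2 repair, 3 replace
```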
  • the pruning process of the network is performed, so that the vgg network is compressed according to a certain ratio, and the compressed network can greatly reduce the parameter amount and improve the calculation speed while ensuring the accuracy of the classification.
  • the space required to run the model is reduced, so that vehicle damage assessment can be performed through a convolutional neural network model on a mobile device.
  • the device for generating a vehicle damage classification model proposed in this embodiment prepares a preset number of original vehicle damage photos for each vehicle damage part, labels and preprocesses the original photos as sample pictures to construct a sample library, uses these sample pictures as input data of the vgg network, prunes the vgg network according to a preset pruning algorithm and a preset compression ratio to delete the redundant channels of the original vgg network, constructs a vehicle damage classification model based on the pruned vgg network, and trains the constructed model with the sample pictures in the sample library to obtain the model parameters and generate the final vehicle damage classification model.
  • the present application compresses the vgg network through the pruning algorithm so that the pruned vgg network reaches the preset compression ratio relative to the original network, which reduces the space occupied by the network; at the same time, the vehicle damage classification model built on the simplified network is faster in both training and testing, making it possible to perform vehicle damage assessment through a convolutional neural network model on a mobile device.
  • the model generation program may also be divided into one or more modules, the one or more modules being stored in the memory 11 and executed by one or more processors (in this embodiment, the processor 12) to implement the present application.
  • the module referred to in the present application refers to a series of computer program instruction segments capable of performing a specific function for describing the execution process of the model generation program in the vehicle damage classification model generating device.
  • referring to FIG. 2, which is a schematic diagram of the program modules of the model generation program in an embodiment of the device for generating a vehicle damage classification model of the present application.
  • the model generation program can be divided into a sample construction module 10, a network compression module 20, and a model generation module 30, where, by way of example:
  • the sample building module 10 is configured to: prepare a preset number of original vehicle damage photos for each vehicle damage part, label and preprocess the original vehicle damage photo as a sample picture, and construct a sample library;
  • the network compression module 20 is configured to: use the sample image in the sample library as input data of the vgg network, perform pruning processing on the vgg network according to a preset pruning algorithm and a preset compression ratio;
  • the model generation module 30 is configured to: construct a vehicle damage classification model based on the pruned vgg network, and train the vehicle damage classification model using the sample pictures in the sample library to determine the model parameters of the vehicle damage classification model.
  • the present application also provides a method for generating a vehicle damage classification model.
  • referring to FIG. 3, which is a flowchart of a preferred embodiment of the method for generating a vehicle damage classification model of the present application; the method may be performed by a device, and the device may be implemented by software and/or hardware.
  • the method for generating a vehicle damage classification model includes:
  • step S10: a preset number of original car damage photos are prepared for each car damage part, and the original car damage photos are labeled and preprocessed as sample pictures to construct a sample library.
  • a preset number of original vehicle damage photos need to be collected for each vehicle damage part, wherein the vehicle damage parts include but are not limited to: a left front door, a right front door, a left fender, a right fender, a front bumper, a rear bumper, etc.
  • the original car damage photos can be obtained from historical vehicle loss-assessment files, together with the loss-assessment information corresponding to each photo, and the damage degree of each original photo is labeled according to that information; for example, if the loss-assessment information corresponding to an original car damage photo is 'intact', the photo is labeled 0; if it is 'needs repainting', the photo is labeled 1; if it is 'needs repair', the photo is labeled 2; and if it is 'needs replacement', the photo is labeled 3.
  • the preset number can be set as needed; the more sample photos there are, the more accurate the classification results of the trained classification model will be.
  • after the photos are labeled, they are preprocessed and used as sample pictures; specifically, the labeled original car damage photos are cropped according to a preset size to generate first sample pictures, data enhancement is performed on the first sample pictures to generate second sample pictures, and a sample library is constructed based on the first sample pictures and the second sample pictures.
  • the size of all the collected original car damage photos is uniformly cropped to a preset size.
  • the preset size may be 224×224, and the cropped photos are used as the first sample pictures; data enhancement is then performed on the first sample pictures to increase the diversity of the samples and improve the accuracy of the model.
  • the data enhancement operations include random cropping, 90-degree flipping, the addition of Gaussian noise, and the like, wherein each first sample picture can be processed by the above enhancement operations to obtain a plurality of pictures; the obtained pictures are used as second sample pictures, and the sample library is constructed based on the first sample pictures and the second sample pictures.
  • Step S20: the sample pictures in the sample library are used as the input data of the vgg network, and the vgg network is pruned according to a preset pruning algorithm and a preset compression ratio.
  • let I_n and W_n denote the convolution operation of the n-th layer in the vgg network, where I_n represents the input of the n-th layer, which has C channels, and W_n represents a set of filters with convolution kernels of size k×k, namely the filters of the n-th layer.
  • the output data of the n-th layer is the input of the (n+1)-th layer.
  • the purpose of the pruning operation is to remove some unimportant filters from W_n, and when a filter in W_n is removed, the corresponding channels in I_{n+1} and W_{n+1} are also removed; therefore, the function of this step is to select, from all the channels, the set of channels to be deleted. Specifically, a greedy algorithm is used to compute the set T of channels for which the value of the following expression is smallest; the channels in this set are the channels to be deleted, and when a channel is deleted, the corresponding filter is deleted with it.
  • where |T| = C × (1 - r), C is the number of channels, x is the input training data, m is the number of training data, and r is the preset compression ratio of the model.
  • the calculated T is a subset of the input channels. According to the above formula, the set of channels to be deleted is calculated for each convolutional layer, and the convolutional layers in the network are then pruned according to the calculation results; the number of channels remaining after pruning depends on the preset compression ratio.
  • the data in the sample library is iterated 1-2 times on the pruned network to fine tune and optimize the network.
  • the data in the sample library is again iterated 1-2 times on the pruned network to further optimize the pruned network.
  • the vgg network is a convolutional neural network, and for a convolutional neural network about 90% of the computation takes place in the convolutional layers, so compressing the convolutional layers can improve computational efficiency; the compression ratio can be set according to the user's needs, for example, if it is set to 50%, half of the channels can be removed from the pruned network.
  • This pruning method has basically no effect on the accuracy of the vgg network, and the classification model constructed by the network can still maintain a high accuracy.
  • since the fully connected layers account for most of the parameters of the network, all the fully connected layers can be replaced by a global average pooling layer.
  • Step S30: a vehicle damage classification model is constructed based on the pruned vgg network, and the vehicle damage classification model is trained using the sample pictures in the sample library to determine the model parameters of the vehicle damage classification model.
  • after the pruning of the vgg network is completed, the vehicle damage classification model is constructed based on the pruned vgg network.
  • the model is a classification model based on a convolutional neural network.
  • the classification model is trained using the sample pictures in the previously constructed sample library; specifically, the sample pictures in the sample library are divided into a training set of a first preset ratio and a verification set of a second preset ratio, wherein the sum of the first preset ratio and the second preset ratio is 1, for example, the first preset ratio is 80% and the second preset ratio is 20%.
  • the training set is used to train the vehicle damage classification model, and the verification set is used to verify the accuracy of the trained vehicle damage classification model, wherein if the accuracy is greater than or equal to the preset accuracy, the training ends; if the accuracy is less than the preset accuracy, the step of dividing the sample pictures in the sample library into a training set of the first preset ratio and a verification set of the second preset ratio is re-executed. That is, when the training result of the classification model is verified using the verification set and the accuracy is less than the preset accuracy, the sample pictures in the sample library are re-divided into two sets according to the first preset ratio and the second preset ratio and the model is retrained, iterating in this way until the trained model reaches the preset accuracy on the verification set, at which point the training is complete and the model parameters are obtained.
  • the final vehicle damage classification model can be applied to vehicle damage assessment.
  • in use, the photo to be assessed is input to the model, and the output is the category of the corresponding degree of damage.
  • the pruning process of the network is performed, so that the vgg network is compressed according to a certain ratio, and the compressed network can greatly reduce the parameter amount and improve the calculation speed while ensuring the accuracy of the classification.
  • the space required to run the model is reduced, so that vehicle damage assessment can be performed through a convolutional neural network model on a mobile device.
  • the method for generating a vehicle damage classification model proposed in this embodiment prepares a preset number of original vehicle damage photos for each vehicle damage part, labels and preprocesses the original photos as sample pictures to construct a sample library, uses these sample pictures as input data of the vgg network, prunes the vgg network according to a preset pruning algorithm and a preset compression ratio to delete the redundant channels of the original vgg network, constructs a vehicle damage classification model based on the pruned vgg network, and trains the constructed model with the sample pictures in the sample library to obtain the model parameters and generate the final vehicle damage classification model.
  • the present application compresses the vgg network through the pruning algorithm so that the pruned vgg network reaches the preset compression ratio relative to the original network, which reduces the space occupied by the network; at the same time, the vehicle damage classification model built on the simplified network is faster in both training and testing, making it possible to perform vehicle damage assessment through a convolutional neural network model on a mobile device.
  • the embodiment of the present application further provides a computer readable storage medium on which a model generation program is stored, and the model generation program may be executed by one or more processors to implement the following operations:
  • a vehicle damage classification model is constructed based on the pruned vgg network, and the vehicle damage classification model is trained using the sample pictures in the sample library to determine the model parameters of the vehicle damage classification model.
  • the specific embodiment of the computer readable storage medium of the present application is substantially the same as the embodiment of the apparatus and method for generating a vehicle damage classification model, and will not be described herein.
  • the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (such as the ROM/RAM described above, a magnetic disk, or an optical disk) and including a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the methods described in the various embodiments of the present application.

Abstract

An apparatus for generating a vehicle damage classification model, comprising a memory and a processor, a model generation program capable of running on the processor being stored in the memory, the program implementing the following steps when executed by the processor: preparing an original vehicle damage photograph for each vehicle damage part, using the original vehicle damage photograph as a sample image after labelling and pre-processing, and constructing a sample library (S10); using the images in the sample library as input data for a vgg network, and pruning the vgg network according to a preset pruning algorithm and a preset compression rate (S20); and, on the basis of the pruned vgg network, constructing a vehicle damage classification model, and using the sample images in the sample library to train the vehicle damage classification model in order to determine model parameters of the vehicle damage classification model (S30). Also provided are a method for generating the vehicle damage classification model and a computer readable storage medium, for solving the technical problem in the prior art of being unable to implement vehicle damage assessment on a mobile device by means of a convolutional neural network model.

Description

Apparatus and method for generating a vehicle damage classification model, and computer readable storage medium
The present application claims priority to the Chinese patent application filed with the Chinese Patent Office on April 26, 2018, with application number 201810388000.1 and entitled "Apparatus and Method for Generating a Vehicle Damage Classification Model, and Computer Readable Storage Medium", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the technical field of vehicle damage assessment, and in particular, to a device, a method, and a computer readable storage medium for generating a vehicle damage classification model.
Background
At present, in the field of auto insurance claims, in order to improve claim settlement efficiency, many auto insurers use image classification and recognition technology in their claims systems to automatically identify the vehicle and the damaged parts in uploaded claim photos. However, much of the existing image classification and recognition technology relies on deep convolutional neural network models, for example convolutional neural network models built on the vgg network, to recognize sample pictures and assess the damage; the existing vgg network, however, has far too many parameters and occupies too much memory to be ported to a mobile device for use. As a result, vehicle damage assessment through a convolutional neural network model cannot be performed on a mobile device.
Summary
The present application provides a device, a method, and a computer readable storage medium for generating a vehicle damage classification model, the main purpose of which is to solve the technical problem in the prior art that vehicle damage assessment cannot be performed on a mobile device through a convolutional neural network model.
To achieve the above object, the present application provides a device for generating a vehicle damage classification model, the device comprising a memory and a processor, the memory storing a model generation program executable on the processor, and the model generation program implementing the following steps when executed by the processor:
preparing a preset number of original vehicle damage photos for each vehicle damage part, labeling and preprocessing the original vehicle damage photos as sample pictures, and constructing a sample library;
using the sample pictures in the sample library as input data of a vgg network, and pruning the vgg network according to a preset pruning algorithm and a preset compression ratio;
constructing a vehicle damage classification model based on the pruned vgg network, and training the vehicle damage classification model using the sample pictures in the sample library to determine the model parameters of the vehicle damage classification model.
In addition, to achieve the above object, the present application further provides a method for generating a vehicle damage classification model, the method comprising:
preparing a preset number of original vehicle damage photos for each vehicle damage part, labeling and preprocessing the original vehicle damage photos as sample pictures, and constructing a sample library;
using the sample pictures in the sample library as input data of a vgg network, and pruning the vgg network according to a preset pruning algorithm and a preset compression ratio;
constructing a vehicle damage classification model based on the pruned vgg network, and training the vehicle damage classification model using the sample pictures in the sample library to determine the model parameters of the vehicle damage classification model.
In addition, to achieve the above object, the present application further provides a computer readable storage medium having a model generation program stored thereon, the model generation program being executable by one or more processors to implement the steps of the method for generating a vehicle damage classification model as described above.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of a preferred embodiment of a device for generating a vehicle damage classification model according to the present application;
FIG. 2 is a schematic diagram of the program modules of a model generation program in an embodiment of the device for generating a vehicle damage classification model according to the present application;
FIG. 3 is a flowchart of a preferred embodiment of a method for generating a vehicle damage classification model according to the present application.
The implementation, functional features, and advantages of the present application will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit it.
The present application provides a device for generating a vehicle damage classification model. FIG. 1 is a schematic diagram of a preferred embodiment of the device for generating a vehicle damage classification model of the present application.
In this embodiment, the device 1 for generating a vehicle damage classification model may be a PC (Personal Computer), or may be a terminal device such as a smartphone, a tablet computer, or a portable computer. The device 1 for generating a vehicle damage classification model comprises at least a memory 11, a processor 12, a communication bus 13, and a network interface 14.
The memory 11 includes at least one type of readable storage medium, including flash memory, a hard disk, a multimedia card, a card-type memory (for example, SD or DX memory), a magnetic memory, a magnetic disk, an optical disk, and the like. In some embodiments, the memory 11 may be an internal storage unit of the device 1 for generating a vehicle damage classification model, such as a hard disk of the device 1. In other embodiments, the memory 11 may also be an external storage device of the device 1, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card equipped on the device 1. Further, the memory 11 may include both an internal storage unit and an external storage device of the device 1. The memory 11 can be used not only to store application software installed on the device 1 and various types of data, such as the code of the model generation program 01, but also to temporarily store data that has been output or is to be output.
In some embodiments, the processor 12 may be a Central Processing Unit (CPU), a controller, a microcontroller, a microprocessor, or another data processing chip, and is used to run program code stored in the memory 11 or to process data, for example to execute the model generation program 01.
The communication bus 13 is used to implement connection and communication between these components.
The network interface 14 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface), and is typically used to establish a communication connection between the device 1 and other electronic devices.
FIG. 1 shows only the device 1 for generating a vehicle damage classification model with the components 11-14 and the model generation program 01; however, it should be understood that not all of the illustrated components are required to be implemented, and more or fewer components may be implemented instead.
Optionally, the device 1 may further include a user interface. The user interface may include a display and an input unit such as a keyboard, and the optional user interface may further include a standard wired interface and a wireless interface. Optionally, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch display, or the like. The display may also be referred to as a display screen or a display unit, and is used to display information processed in the device 1 for generating a vehicle damage classification model and to display a visualized user interface.
Optionally, the device 1 may further include a touch sensor. The area provided by the touch sensor for the user to perform a touch operation is referred to as a touch area. The touch sensor described here may be a resistive touch sensor, a capacitive touch sensor, or the like. Moreover, the touch sensor includes not only a contact-type touch sensor but also a proximity-type touch sensor and the like. In addition, the touch sensor may be a single sensor or a plurality of sensors arranged in an array. The area of the display of the device 1 may be the same as or different from the area of the touch sensor. Optionally, the display is stacked with the touch sensor to form a touch display screen, and the device 1 detects touch operations triggered by the user based on the touch display screen.
Optionally, the device 1 may further include a camera, an RF (Radio Frequency) circuit, sensors, an audio circuit, a WiFi module, and the like. The sensors include, for example, a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor; if the device 1 is a mobile terminal, the ambient light sensor may adjust the brightness of the display screen according to the ambient light, and the proximity sensor may turn off the display screen and/or the backlight when the mobile terminal is moved to the ear. As a type of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in each direction (usually three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications that recognize the posture of the mobile terminal (such as switching between landscape and portrait screens, related games, and magnetometer attitude calibration) and for vibration-recognition functions (such as a pedometer or tapping); of course, the mobile terminal may also be equipped with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which are not described here again.
In the device embodiment shown in FIG. 1, a model generation program is stored in the memory 11, and the processor 12 implements the following steps when executing the model generation program stored in the memory 11:
preparing a preset number of original vehicle damage photos for each vehicle damage part, labeling and preprocessing the original vehicle damage photos as sample pictures, and constructing a sample library.
In this embodiment, a preset number of original vehicle damage photos need to be collected for each vehicle damage part, wherein the vehicle damage parts include but are not limited to: a left front door, a right front door, a left fender, a right fender, a front bumper, a rear bumper, etc. The original vehicle damage photos can be obtained from historical vehicle loss-assessment files, together with the loss-assessment information corresponding to each photo, and the damage degree of each original photo is labeled according to that information; for example, if the loss-assessment information corresponding to an original photo is 'intact', the photo is labeled 0; if it is 'needs repainting', the photo is labeled 1; if it is 'needs repair', the photo is labeled 2; and if it is 'needs replacement', the photo is labeled 3. The preset number can be set as needed; the more sample photos there are, the more accurate the classification results of the trained classification model will be.
After the photos are labeled, they are preprocessed and used as sample pictures; specifically, the labeled original vehicle damage photos are cropped according to a preset size to generate first sample pictures, data enhancement is performed on the first sample pictures to generate second sample pictures, and a sample library is constructed based on the first sample pictures and the second sample pictures.
The size of all the collected original vehicle damage photos is uniformly cropped to a preset size; in some embodiments, the preset size may be 224×224, and the cropped photos are used as the first sample pictures. Data enhancement is then performed on the first sample pictures to increase the diversity of the samples and improve the accuracy of the model. The enhancement operations include random cropping, 90-degree flipping, the addition of Gaussian noise, and the like; each first sample picture can be processed by the above enhancement operations to obtain a plurality of pictures. The obtained pictures are used as second sample pictures, and the sample library is constructed based on the first sample pictures and the second sample pictures.
The sample pictures in the sample library are used as input data of the vgg network, and the vgg network is pruned according to a preset pruning algorithm and a preset compression ratio.
Let I_n and W_n denote the convolution operation of the n-th layer in the vgg network, where I_n represents the input of the n-th layer, which has C channels, and W_n represents a set of filters with convolution kernels of size k×k, namely the filters of the n-th layer. The output data of the n-th layer is the input of the (n+1)-th layer.
The purpose of the pruning operation is to remove some unimportant filters from W_n; when a filter in W_n is removed, the corresponding channels in I_{n+1} and W_{n+1} are also removed. Therefore, the function of this step is to select, from all the channels, the set of channels to be deleted. Specifically, a greedy algorithm is used to compute the set T of channels for which the value of the following expression is smallest; the channels in this set are the channels to be deleted, and when a channel is deleted, the corresponding filter is deleted with it.
(The channel-selection expression is given in the original only as equation images PCTCN2018102411-appb-000001 and PCTCN2018102411-appb-000002.) Here |T| = C × (1 - r), where C is the number of channels, x is the input training data, m is the number of training data, and r is the preset compression ratio of the model. The calculated T is a subset of the input channels. According to the above formula, the set of channels to be deleted is calculated for each convolutional layer, and the convolutional layers in the network are then pruned according to the calculation results; the number of channels remaining after pruning depends on the preset compression ratio.
Optionally, after each layer has been pruned, the data in the sample library is iterated over the pruned network one or two times to fine-tune and optimize the network. After all the convolutional layers have been pruned, the data in the sample library is again iterated over the pruned network one or two times to further optimize the pruned network.
In addition, the vgg network is a convolutional neural network, and for a convolutional neural network about 90% of the computation takes place in the convolutional layers, so compressing the convolutional layers can improve computational efficiency. The compression ratio can be set according to the user's needs; for example, if it is set to 50%, half of the channels can be removed from the pruned network. This pruning method has essentially no effect on the accuracy of the vgg network, and the classification model constructed from the pruned network can still maintain a high accuracy. In addition, since the fully connected layers account for most of the parameters of the network, all the fully connected layers can be replaced by a global average pooling layer.
A vehicle damage classification model is constructed based on the pruned vgg network, and the vehicle damage classification model is trained using the sample pictures in the sample library to determine the model parameters of the vehicle damage classification model.
After the pruning of the vgg network is completed, the vehicle damage classification model is constructed based on the pruned vgg network; this model is a classification model built on a convolutional neural network. The classification model is trained using the sample pictures in the previously constructed sample library. Specifically, the sample pictures in the sample library are divided into a training set of a first preset ratio and a verification set of a second preset ratio, wherein the sum of the first preset ratio and the second preset ratio is 1, for example, the first preset ratio is 80% and the second preset ratio is 20%. The training set is used to train the vehicle damage classification model, and the verification set is used to verify the accuracy of the trained vehicle damage classification model; if the accuracy is greater than or equal to the preset accuracy, the training ends, and if the accuracy is less than the preset accuracy, the step of dividing the sample pictures in the sample library into a training set of the first preset ratio and a verification set of the second preset ratio is re-executed. That is, when the training result of the classification model is verified using the verification set and the accuracy is less than the preset accuracy, the sample pictures in the sample library are re-divided into two sets according to the first preset ratio and the second preset ratio and the model is retrained, iterating in this way until the trained model reaches the preset accuracy on the verification set, at which point the training is complete, the model parameters are obtained, and the final vehicle damage classification model is generated. The model can be applied to vehicle damage assessment: in use, the photo to be assessed is input to the model, and the output is the category of the corresponding degree of damage. In the present application, the network is pruned so that the vgg network is compressed according to a certain ratio; the compressed network greatly reduces the number of parameters and improves the computation speed while ensuring classification accuracy, and reduces the space required to run the model, so that vehicle damage assessment can be performed through a convolutional neural network model on a mobile device.
The device for generating a vehicle damage classification model proposed in this embodiment prepares a preset number of original vehicle damage photos for each vehicle damage part, labels and preprocesses the original photos as sample pictures to construct a sample library, uses these sample pictures as input data of the vgg network, prunes the vgg network according to a preset pruning algorithm and a preset compression ratio to delete the redundant channels of the original vgg network, constructs a vehicle damage classification model based on the pruned vgg network, and trains the constructed model with the sample pictures in the sample library to obtain the model parameters and generate the final vehicle damage classification model. The present application compresses the vgg network through the pruning algorithm so that the pruned vgg network reaches the preset compression ratio relative to the original network, which reduces the space occupied by the network; at the same time, the vehicle damage classification model built on the simplified network is faster in both training and testing, making it possible to perform vehicle damage assessment through a convolutional neural network model on a mobile device.
Optionally, in other embodiments, the model generation program may also be divided into one or more modules, which are stored in the memory 11 and executed by one or more processors (the processor 12 in this embodiment) to implement the present application. A module referred to in the present application is a series of computer program instruction segments capable of performing a specific function, used to describe the execution of the model generation program in the apparatus for generating a vehicle damage classification model.
For example, referring to FIG. 2, which is a schematic diagram of the program modules of the model generation program in an embodiment of the apparatus for generating a vehicle damage classification model of the present application, the model generation program may be divided into a sample construction module 10, a network compression module 20, and a model generation module 30. Illustratively:
The sample construction module 10 is configured to: prepare a preset number of original vehicle damage photos for each damaged vehicle part, and label and preprocess the original vehicle damage photos to obtain sample pictures and construct a sample library;
The network compression module 20 is configured to: use the sample pictures in the sample library as input data of the vgg network, and prune the vgg network according to a preset pruning algorithm and a preset compression ratio;
The model generation module 30 is configured to: construct a vehicle damage classification model based on the pruned vgg network, and train the vehicle damage classification model with the sample pictures in the sample library to determine the model parameters of the vehicle damage classification model.
The functions or operation steps implemented when the program modules such as the sample construction module 10, the network compression module 20, and the model generation module 30 are executed are substantially the same as those of the above embodiments and are not repeated here.
In addition, the present application also provides a method for generating a vehicle damage classification model. Referring to FIG. 3, which is a flowchart of a preferred embodiment of the method for generating a vehicle damage classification model of the present application, the method may be performed by an apparatus, and the apparatus may be implemented by software and/or hardware.
In this embodiment, the method for generating a vehicle damage classification model includes:
Step S10: prepare a preset number of original vehicle damage photos for each damaged vehicle part, and label and preprocess the original vehicle damage photos to obtain sample pictures and construct a sample library.
In this embodiment, a preset number of original vehicle damage photos are collected for each damaged vehicle part, where the damaged parts include, but are not limited to: the left front door, the right front door, the left fender, the right fender, the front bumper, and the rear bumper. The original vehicle damage photos can be obtained from historical vehicle damage assessment files, from which the corresponding damage assessment information is also obtained, and the damage degree of each original photo is labeled according to that information: for example, if the assessment information indicates the part is intact, the photo is labeled 0; if it indicates that repainting is required, the photo is labeled 1; if it indicates that repair is required, the photo is labeled 2; and if it indicates that replacement is required, the photo is labeled 3. The preset number can be set as needed; the more sample photos there are, the more accurate the classification results of the trained classification model will be.
After the photos have been labeled, they are preprocessed to obtain sample pictures. Specifically, the labeled original photos are cropped to a preset size to generate first sample pictures, data augmentation is applied to the first sample pictures to generate second sample pictures, and the sample library is built from the first sample pictures and the second sample pictures.
All collected original vehicle damage photos are uniformly cropped to a preset size; in some embodiments, the preset size may be 224x224. The cropped photos serve as the first sample pictures, and data augmentation is applied to them to increase sample diversity and improve the accuracy of the model. The augmentation methods include random cropping, 90-degree rotation, and adding Gaussian noise; each first sample picture can yield multiple pictures after being processed by these augmentation methods. The resulting pictures serve as the second sample pictures, and the sample library is built from the first sample pictures and the second sample pictures.
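As an illustration of the cropping and augmentation described above, the following Python sketch (using PIL and NumPy, a tooling choice not prescribed by this document) resizes an original photo to the 224x224 preset size and derives additional samples by random cropping, 90-degree rotation, and added Gaussian noise; the label mapping mirrors the 0-3 damage-degree labels described under step S10. The helper names, the crop margin, and the noise standard deviation are illustrative assumptions, not part of the original disclosure.

```python
import random
import numpy as np
from PIL import Image

# Damage-degree labels as described above: 0 intact, 1 repaint, 2 repair, 3 replace.
LABELS = {"intact": 0, "repaint": 1, "repair": 2, "replace": 3}

PRESET_SIZE = 224  # preset picture size used in this embodiment


def first_sample(photo_path: str) -> Image.Image:
    """Resize an original damage photo to the preset size (first sample picture)."""
    return Image.open(photo_path).convert("RGB").resize((PRESET_SIZE, PRESET_SIZE))


def second_samples(img: Image.Image) -> list:
    """Derive augmented copies (second sample pictures) from a first sample picture."""
    samples = []

    # Random crop: take a slightly smaller window, then scale back to the preset size.
    margin = 16
    left, top = random.randint(0, margin), random.randint(0, margin)
    box = (left, top, left + PRESET_SIZE - margin, top + PRESET_SIZE - margin)
    samples.append(img.crop(box).resize((PRESET_SIZE, PRESET_SIZE)))

    # 90-degree rotation.
    samples.append(img.rotate(90))

    # Additive Gaussian noise.
    arr = np.asarray(img, dtype=np.float32)
    noisy = np.clip(arr + np.random.normal(0.0, 10.0, arr.shape), 0, 255).astype(np.uint8)
    samples.append(Image.fromarray(noisy))

    return samples
```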
Step S20: use the sample pictures in the sample library as input data of the vgg network, and prune the vgg network according to a preset pruning algorithm and a preset compression ratio.
Let I_n and W_n denote the convolution operation of the n-th layer of the vgg network, where I_n is the input of the n-th layer and has C channels, and W_n is a set of filters with kernel size k*k, namely the filters of the n-th layer. The output data of the n-th layer is the input of the (n+1)-th layer.
The purpose of the pruning operation is to remove unimportant filters from W_n; when a filter in W_n is removed, its corresponding channel in I_(n+1) and W_(n+1) is removed along with it. The role of this step is therefore to select, from all the channels, the set of channels to be deleted. Specifically, a greedy algorithm is used to find the set of channels T for which the value of the expression below is minimal; the channels in this set are the ones to be deleted, and when a channel is deleted, its corresponding filter is deleted as well.
T is computed according to the expression shown in Figure PCTCN2018102411-appb-000003, subject to |T| = C×(1-r), with the auxiliary quantity defined as shown in Figure PCTCN2018102411-appb-000004, where C is the number of channels, x is the input training data, m is the number of training samples, and r is the preset compression ratio of the model. The computed T is a subset of the input channels. The set of channels to be deleted is computed for every convolutional layer according to this formula, and the convolutional layers of the network are then pruned according to the results. The number of channels remaining after pruning depends on the preset compression ratio.
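The exact objective minimized by the greedy algorithm is given only in the formula figures referenced above and is not reproduced here. As an illustration only, the following Python/NumPy sketch assumes a ThiNet-style criterion: grow the deleted set T so that the squared sum of the deleted channels' contributions stays minimal, subject to |T| = C×(1-r). The array layout and the criterion itself are assumptions for this sketch, not a definitive reading of the patent's formula.

```python
import numpy as np


def channels_to_delete(x: np.ndarray, r: float) -> list:
    """Greedily select the channel set T to delete for one convolutional layer.

    x: array of shape (m, C); for each of the m training samples, the per-channel
       contribution of this layer's input to the next layer's response (assumed layout).
    r: preset compression ratio; |T| = C * (1 - r) channels are deleted.
    """
    m, C = x.shape
    target = int(C * (1 - r))
    T, remaining = [], list(range(C))
    partial = np.zeros(m)  # running contribution of channels already placed in T

    while len(T) < target:
        # Pick the channel whose addition to T increases the squared sum the least.
        costs = [np.sum((partial + x[:, c]) ** 2) for c in remaining]
        best = remaining[int(np.argmin(costs))]
        partial += x[:, best]
        remaining.remove(best)
        T.append(best)

    return sorted(T)


# Example: 16 channels, 100 samples, compression ratio 50% -> delete 8 channels.
contrib = np.random.randn(100, 16)
print(channels_to_delete(contrib, r=0.5))
```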
Optionally, after each layer is pruned, the data in the sample library is used to iterate 1-2 times over the pruned network in order to fine-tune and optimize it. After all convolutional layers have been pruned, the sample library data is used again to iterate 1-2 times over the pruned network to further optimize it.
In addition, the vgg network is a convolutional neural network, and for such a network about 90% of the computation takes place in the convolutional layers, so compressing the convolutional layers improves computational efficiency. The compression ratio can be set as needed by the user; for example, if it is set to 50%, half of the channels can be removed from the pruned network. This pruning approach has essentially no effect on the accuracy of the vgg network, and a classification model built on the pruned network can still maintain high accuracy. Moreover, since the fully connected layers account for most of the network's parameters, all fully connected layers can be replaced with a single global average pooling layer.
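To illustrate replacing all fully connected layers with a global average pooling layer, here is a minimal PyTorch sketch; the framework, the tiny stand-in trunk, and the 1x1 scoring convolution are assumptions added for the example, since the document itself only specifies that the fully connected layers are replaced by global average pooling. In practice the trunk would be the pruned vgg convolutional layers, and the four outputs correspond to the damage-degree categories 0-3.

```python
import torch
import torch.nn as nn


class DamageClassifier(nn.Module):
    """Convolutional trunk with the fully connected layers replaced by
    a 1x1 scoring convolution followed by global average pooling."""

    def __init__(self, trunk: nn.Module, trunk_channels: int, num_classes: int = 4):
        super().__init__()
        self.trunk = trunk                                  # pruned convolutional layers
        self.score = nn.Conv2d(trunk_channels, num_classes, kernel_size=1)
        self.gap = nn.AdaptiveAvgPool2d(1)                  # global average pooling instead of FC layers

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.score(self.trunk(x))
        return self.gap(x).flatten(1)                       # logits of shape (batch, num_classes)


# Toy trunk standing in for the pruned vgg convolutional layers.
trunk = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
)
model = DamageClassifier(trunk, trunk_channels=64)
print(model(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 4])
```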
Step S30: construct a vehicle damage classification model based on the pruned vgg network, and train the vehicle damage classification model with the sample pictures in the sample library to determine the model parameters of the vehicle damage classification model.
After the pruning of the vgg network is completed, a vehicle damage classification model is constructed based on the pruned vgg network; this model is a classification model built on a convolutional neural network, and it is trained with the sample pictures in the previously constructed sample library. Specifically, the sample pictures in the sample library are divided into a training set of a first preset proportion and a validation set of a second preset proportion, where the two proportions sum to 1, for example 80% and 20% respectively. The training set is used to train the vehicle damage classification model, and the validation set is used to verify the accuracy of the trained model: if the accuracy is greater than or equal to a preset accuracy, training ends; if the accuracy is lower than the preset accuracy, the step of dividing the sample pictures in the sample library into a training set of the first preset proportion and a validation set of the second preset proportion is executed again. That is, when the training result of the classification model is verified on the validation set and the accuracy falls below the preset accuracy, the sample pictures are re-split into two sets according to the first and second preset proportions and the model is retrained; training iterates in this way until the accuracy obtained on the validation set reaches the preset accuracy, at which point training is complete, the model parameters are obtained, and the final vehicle damage classification model is generated. The model can be applied to vehicle damage assessment: in use, a photo of the damage to be assessed is input to the model, and the output is the category of the corresponding damage degree. Because the vgg network has been pruned and thereby compressed by a certain ratio, the compressed network preserves classification accuracy while greatly reducing the number of parameters, increasing computation speed, and shrinking the space the model needs at run time, which makes it possible to perform vehicle damage assessment on a mobile device with a convolutional neural network model.
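Below is a minimal PyTorch sketch of this split-train-validate loop, assuming an 80%/20% split and an illustrative 90% preset accuracy; the synthetic stand-in dataset, the tiny stand-in network, and the max_rounds safety cap are assumptions added for the example rather than part of the original disclosure. In practice the dataset would be the sample library built in step S10 and the network would be the pruned-vgg classifier described above.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset, random_split

TRAIN_RATIO = 0.8        # first preset proportion; the validation set gets the rest
PRESET_ACCURACY = 0.90   # preset accuracy threshold (value chosen for illustration)


def accuracy(model: nn.Module, loader: DataLoader) -> float:
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in loader:
            correct += (model(x).argmax(1) == y).sum().item()
            total += y.numel()
    return correct / max(total, 1)


def train_until_accurate(model: nn.Module, dataset, epochs: int = 2, max_rounds: int = 5):
    """Split the sample library, train, validate; if validation accuracy is below the
    preset threshold, re-split and train again (max_rounds is a safety cap added here)."""
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(max_rounds):
        n_train = int(len(dataset) * TRAIN_RATIO)
        train_set, val_set = random_split(dataset, [n_train, len(dataset) - n_train])
        train_loader = DataLoader(train_set, batch_size=8, shuffle=True)
        val_loader = DataLoader(val_set, batch_size=8)
        opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
        model.train()
        for _ in range(epochs):
            for x, y in train_loader:
                opt.zero_grad()
                loss_fn(model(x), y).backward()
                opt.step()
        if accuracy(model, val_loader) >= PRESET_ACCURACY:
            break
    return model


# Tiny synthetic stand-in for the sample library (3x224x224 pictures, labels 0-3).
library = TensorDataset(torch.randn(32, 3, 224, 224), torch.randint(0, 4, (32,)))
net = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 4))
trained = train_until_accurate(net, library)
```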
The method for generating a vehicle damage classification model proposed in this embodiment prepares a preset number of original vehicle damage photos for each damaged vehicle part, labels and preprocesses the original photos to obtain sample pictures, and builds a sample library from them. The sample pictures are used as input data of the vgg network, which is pruned according to a preset pruning algorithm and a preset compression ratio so that redundant channels in the original vgg network are deleted. A vehicle damage classification model is then constructed on the pruned vgg network and trained with the sample pictures in the sample library to obtain the model parameters and generate the final vehicle damage classification model. By compressing the vgg network with the pruning algorithm until the compression ratio of the pruned network relative to the original network reaches the preset compression ratio, the space occupied by the network is reduced; at the same time, the vehicle damage classification model built on the simplified network trains and tests faster, making it feasible to perform vehicle damage assessment on a mobile device with a convolutional neural network model.
In addition, an embodiment of the present application further provides a computer readable storage medium storing a model generation program, where the model generation program can be executed by one or more processors to implement the following operations:
preparing a preset number of original vehicle damage photos for each damaged vehicle part, and labeling and preprocessing the original vehicle damage photos to obtain sample pictures and construct a sample library;
using the sample pictures in the sample library as input data of the vgg network, and pruning the vgg network according to a preset pruning algorithm and a preset compression ratio;
constructing a vehicle damage classification model based on the pruned vgg network, and training the vehicle damage classification model with the sample pictures in the sample library to determine the model parameters of the vehicle damage classification model.
The specific implementation of the computer readable storage medium of the present application is substantially the same as the embodiments of the apparatus and method for generating a vehicle damage classification model described above, and is not repeated here.
It should be noted that the serial numbers of the above embodiments of the present application are for description only and do not indicate that one embodiment is better than another. In addition, the terms "include", "comprise", or any other variant thereof herein are intended to cover a non-exclusive inclusion, so that a process, apparatus, article, or method that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, apparatus, article, or method. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, apparatus, article, or method that includes that element.
From the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present application, in essence or in the part that contributes to the prior art, can be embodied in the form of a software product stored in a storage medium as described above (such as a ROM/RAM, a magnetic disk, or an optical disc), including several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present application.
The above are only preferred embodiments of the present application and are not intended to limit the scope of the patent; any equivalent structure or equivalent process transformation made using the contents of the specification and drawings of the present application, or any direct or indirect application in other related technical fields, is likewise included in the scope of patent protection of the present application.

Claims (20)

1. An apparatus for generating a vehicle damage classification model, wherein the apparatus comprises a memory and a processor, the memory stores a model generation program executable on the processor, and the model generation program, when executed by the processor, implements the following steps:
preparing a preset number of original vehicle damage photos for each damaged vehicle part, and labeling and preprocessing the original vehicle damage photos to obtain sample pictures and construct a sample library;
using the sample pictures in the sample library as input data of a vgg network, and pruning the vgg network according to a preset pruning algorithm and a preset compression ratio;
constructing a vehicle damage classification model based on the pruned vgg network, and training the vehicle damage classification model with the sample pictures in the sample library to determine model parameters of the vehicle damage classification model.
2. The apparatus for generating a vehicle damage classification model according to claim 1, wherein the step of using the sample pictures in the sample library as input data of the vgg network and pruning the vgg network according to a preset pruning algorithm and a preset compression ratio comprises:
using the sample pictures in the sample library as input data of the vgg network and, during the convolution computation of the vgg network, computing the set of channels to be deleted for each layer of the network according to a greedy algorithm and the preset compression ratio;
pruning the convolutional layers of the vgg network according to the computed sets of channels.
3. The apparatus for generating a vehicle damage classification model according to claim 2, wherein the step of computing the set of channels to be deleted for each layer of the network according to the greedy algorithm and the preset compression ratio comprises:
computing, according to the formula shown in Figure PCTCN2018102411-appb-100001, the set T of channels to be deleted for each layer of the network, where |T| = C×(1-r), the auxiliary quantity is defined as shown in Figure PCTCN2018102411-appb-100002, C is the number of channels, x is the input training data, m is the number of training samples, and r is the preset compression ratio.
4. The apparatus for generating a vehicle damage classification model according to any one of claims 1 to 3, wherein the step of preparing a preset number of original vehicle damage photos for each damaged vehicle part, and labeling and preprocessing the original vehicle damage photos to obtain sample pictures and construct a sample library comprises:
preparing a preset number of original vehicle damage photos for each damaged vehicle part;
obtaining the damage assessment information corresponding to the original vehicle damage photos, and labeling the damage degree of the original vehicle damage photos according to the assessment information;
cropping the labeled original vehicle damage photos to a preset size to generate first sample pictures, and applying data augmentation to the first sample pictures to generate second sample pictures;
constructing the sample library based on the first sample pictures and the second sample pictures.
5. The apparatus for generating a vehicle damage classification model according to claim 1, wherein the step of training the vehicle damage classification model with the sample pictures in the sample library comprises:
dividing the sample pictures in the sample library into a training set of a first preset proportion and a validation set of a second preset proportion, where the sum of the first preset proportion and the second preset proportion is 1;
training the vehicle damage classification model with the training set, and verifying the accuracy of the trained vehicle damage classification model with the validation set, wherein if the accuracy is greater than or equal to a preset accuracy, training ends, and if the accuracy is lower than the preset accuracy, the step of dividing the sample pictures in the sample library into a training set of the first preset proportion and a validation set of the second preset proportion is executed again.
6. The apparatus for generating a vehicle damage classification model according to claim 2, wherein the step of training the vehicle damage classification model with the sample pictures in the sample library comprises:
dividing the sample pictures in the sample library into a training set of a first preset proportion and a validation set of a second preset proportion, where the sum of the first preset proportion and the second preset proportion is 1;
training the vehicle damage classification model with the training set, and verifying the accuracy of the trained vehicle damage classification model with the validation set, wherein if the accuracy is greater than or equal to a preset accuracy, training ends, and if the accuracy is lower than the preset accuracy, the step of dividing the sample pictures in the sample library into a training set of the first preset proportion and a validation set of the second preset proportion is executed again.
7. The apparatus for generating a vehicle damage classification model according to claim 3, wherein the step of training the vehicle damage classification model with the sample pictures in the sample library comprises:
dividing the sample pictures in the sample library into a training set of a first preset proportion and a validation set of a second preset proportion, where the sum of the first preset proportion and the second preset proportion is 1;
training the vehicle damage classification model with the training set, and verifying the accuracy of the trained vehicle damage classification model with the validation set, wherein if the accuracy is greater than or equal to a preset accuracy, training ends, and if the accuracy is lower than the preset accuracy, the step of dividing the sample pictures in the sample library into a training set of the first preset proportion and a validation set of the second preset proportion is executed again.
8. A method for generating a vehicle damage classification model, wherein the method comprises:
preparing a preset number of original vehicle damage photos for each damaged vehicle part, and labeling and preprocessing the original vehicle damage photos to obtain sample pictures and construct a sample library;
using the sample pictures in the sample library as input data of a vgg network, and pruning the vgg network according to a preset pruning algorithm and a preset compression ratio;
constructing a vehicle damage classification model based on the pruned vgg network, and training the vehicle damage classification model with the sample pictures in the sample library to determine model parameters of the vehicle damage classification model.
9. The method for generating a vehicle damage classification model according to claim 8, wherein the step of using the sample pictures in the sample library as input data of the vgg network and pruning the vgg network according to a preset pruning algorithm and a preset compression ratio comprises:
using the sample pictures in the sample library as input data of the vgg network and, during the convolution computation of the vgg network, computing the set of channels to be deleted for each layer of the network according to a greedy algorithm and the preset compression ratio;
pruning the convolutional layers of the vgg network according to the computed sets of channels.
10. The method for generating a vehicle damage classification model according to claim 9, wherein the step of computing the set of channels to be deleted for each layer of the network according to the greedy algorithm and the preset compression ratio comprises:
computing, according to the formula shown in Figure PCTCN2018102411-appb-100003, the set T of channels to be deleted for each layer of the network, where |T| = C×(1-r), the auxiliary quantity is defined as shown in Figure PCTCN2018102411-appb-100004, C is the number of channels, x is the input training data, m is the number of training samples, and r is the preset compression ratio.
11. The method for generating a vehicle damage classification model according to any one of claims 8 to 10, wherein the step of preparing a preset number of original vehicle damage photos for each damaged vehicle part, and labeling and preprocessing the original vehicle damage photos to obtain sample pictures and construct a sample library comprises:
preparing a preset number of original vehicle damage photos for each damaged vehicle part;
obtaining the damage assessment information corresponding to the original vehicle damage photos, and labeling the damage degree of the original vehicle damage photos according to the assessment information;
cropping the labeled original vehicle damage photos to a preset size to generate first sample pictures, and applying data augmentation to the first sample pictures to generate second sample pictures;
constructing the sample library based on the first sample pictures and the second sample pictures.
12. The method for generating a vehicle damage classification model according to claim 8, wherein the step of training the vehicle damage classification model with the sample pictures in the sample library comprises:
dividing the sample pictures in the sample library into a training set of a first preset proportion and a validation set of a second preset proportion, where the sum of the first preset proportion and the second preset proportion is 1;
training the vehicle damage classification model with the training set, and verifying the accuracy of the trained vehicle damage classification model with the validation set, wherein if the accuracy is greater than or equal to a preset accuracy, training ends, and if the accuracy is lower than the preset accuracy, the step of dividing the sample pictures in the sample library into a training set of the first preset proportion and a validation set of the second preset proportion is executed again.
13. The method for generating a vehicle damage classification model according to claim 9, wherein the step of training the vehicle damage classification model with the sample pictures in the sample library comprises:
dividing the sample pictures in the sample library into a training set of a first preset proportion and a validation set of a second preset proportion, where the sum of the first preset proportion and the second preset proportion is 1;
training the vehicle damage classification model with the training set, and verifying the accuracy of the trained vehicle damage classification model with the validation set, wherein if the accuracy is greater than or equal to a preset accuracy, training ends, and if the accuracy is lower than the preset accuracy, the step of dividing the sample pictures in the sample library into a training set of the first preset proportion and a validation set of the second preset proportion is executed again.
14. The method for generating a vehicle damage classification model according to claim 10, wherein the step of training the vehicle damage classification model with the sample pictures in the sample library comprises:
dividing the sample pictures in the sample library into a training set of a first preset proportion and a validation set of a second preset proportion, where the sum of the first preset proportion and the second preset proportion is 1;
training the vehicle damage classification model with the training set, and verifying the accuracy of the trained vehicle damage classification model with the validation set, wherein if the accuracy is greater than or equal to a preset accuracy, training ends, and if the accuracy is lower than the preset accuracy, the step of dividing the sample pictures in the sample library into a training set of the first preset proportion and a validation set of the second preset proportion is executed again.
15. A computer readable storage medium, wherein the computer readable storage medium stores a model generation program, and the model generation program can be executed by one or more processors to implement the following steps:
preparing a preset number of original vehicle damage photos for each damaged vehicle part, and labeling and preprocessing the original vehicle damage photos to obtain sample pictures and construct a sample library;
using the sample pictures in the sample library as input data of a vgg network, and pruning the vgg network according to a preset pruning algorithm and a preset compression ratio;
constructing a vehicle damage classification model based on the pruned vgg network, and training the vehicle damage classification model with the sample pictures in the sample library to determine model parameters of the vehicle damage classification model.
16. The computer readable storage medium according to claim 15, wherein the step of using the sample pictures in the sample library as input data of the vgg network and pruning the vgg network according to a preset pruning algorithm and a preset compression ratio comprises:
using the sample pictures in the sample library as input data of the vgg network and, during the convolution computation of the vgg network, computing the set of channels to be deleted for each layer of the network according to a greedy algorithm and the preset compression ratio;
pruning the convolutional layers of the vgg network according to the computed sets of channels.
17. The computer readable storage medium according to claim 16, wherein the step of computing the set of channels to be deleted for each layer of the network according to the greedy algorithm and the preset compression ratio comprises:
computing, according to the formula shown in Figure PCTCN2018102411-appb-100005, the set T of channels to be deleted for each layer of the network, where |T| = C×(1-r), the auxiliary quantity is defined as shown in Figure PCTCN2018102411-appb-100006, C is the number of channels, x is the input training data, m is the number of training samples, and r is the preset compression ratio.
18. The computer readable storage medium according to any one of claims 15 to 17, wherein the step of preparing a preset number of original vehicle damage photos for each damaged vehicle part, and labeling and preprocessing the original vehicle damage photos to obtain sample pictures and construct a sample library comprises:
preparing a preset number of original vehicle damage photos for each damaged vehicle part;
obtaining the damage assessment information corresponding to the original vehicle damage photos, and labeling the damage degree of the original vehicle damage photos according to the assessment information;
cropping the labeled original vehicle damage photos to a preset size to generate first sample pictures, and applying data augmentation to the first sample pictures to generate second sample pictures;
constructing the sample library based on the first sample pictures and the second sample pictures.
19. The computer readable storage medium according to claim 15, wherein the step of training the vehicle damage classification model with the sample pictures in the sample library comprises:
dividing the sample pictures in the sample library into a training set of a first preset proportion and a validation set of a second preset proportion, where the sum of the first preset proportion and the second preset proportion is 1;
training the vehicle damage classification model with the training set, and verifying the accuracy of the trained vehicle damage classification model with the validation set, wherein if the accuracy is greater than or equal to a preset accuracy, training ends, and if the accuracy is lower than the preset accuracy, the step of dividing the sample pictures in the sample library into a training set of the first preset proportion and a validation set of the second preset proportion is executed again.
20. The computer readable storage medium according to claim 16 or 17, wherein the step of training the vehicle damage classification model with the sample pictures in the sample library comprises:
dividing the sample pictures in the sample library into a training set of a first preset proportion and a validation set of a second preset proportion, where the sum of the first preset proportion and the second preset proportion is 1;
training the vehicle damage classification model with the training set, and verifying the accuracy of the trained vehicle damage classification model with the validation set, wherein if the accuracy is greater than or equal to a preset accuracy, training ends, and if the accuracy is lower than the preset accuracy, the step of dividing the sample pictures in the sample library into a training set of the first preset proportion and a validation set of the second preset proportion is executed again.
PCT/CN2018/102411 2018-04-26 2018-08-27 Apparatus and method for generating vehicle damage classification model, and computer readable storage medium WO2019205391A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810388000.1 2018-04-26
CN201810388000.1A CN108764046A (en) 2018-04-26 2018-04-26 Generating means, method and the computer readable storage medium of vehicle damage disaggregated model

Publications (1)

Publication Number Publication Date
WO2019205391A1 true WO2019205391A1 (en) 2019-10-31

Family

ID=64012083

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/102411 WO2019205391A1 (en) 2018-04-26 2018-08-27 Apparatus and method for generating vehicle damage classification model, and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN108764046A (en)
WO (1) WO2019205391A1 (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109635110A (en) * 2018-11-30 2019-04-16 北京百度网讯科技有限公司 Data processing method, device, equipment and computer readable storage medium
CN111291882A (en) * 2018-12-06 2020-06-16 北京百度网讯科技有限公司 Model conversion method, device, equipment and computer storage medium
CN110378254B (en) * 2019-07-03 2022-04-19 中科软科技股份有限公司 Method and system for identifying vehicle damage image modification trace, electronic device and storage medium
CN112884142B (en) * 2019-11-29 2022-11-22 北京市商汤科技开发有限公司 Neural network training method, target detection method, device, equipment and storage medium
CN111666973B (en) * 2020-04-29 2024-04-09 平安科技(深圳)有限公司 Vehicle damage picture processing method and device, computer equipment and storage medium
CN111553480B (en) * 2020-07-10 2021-01-01 腾讯科技(深圳)有限公司 Image data processing method and device, computer readable medium and electronic equipment
CN111899204B (en) * 2020-07-30 2024-04-09 平安科技(深圳)有限公司 Vehicle loss detection data synthesis method, device and storage medium
CN112465018B (en) * 2020-11-26 2024-02-02 深源恒际科技有限公司 Intelligent screenshot method and system of vehicle video damage assessment system based on deep learning
CN113516163B (en) * 2021-04-26 2024-03-12 合肥市正茂科技有限公司 Vehicle classification model compression method, device and storage medium based on network pruning
CN114241398A (en) * 2022-02-23 2022-03-25 深圳壹账通科技服务有限公司 Vehicle damage assessment method, device, equipment and storage medium based on artificial intelligence

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106780048A (en) * 2016-11-28 2017-05-31 中国平安财产保险股份有限公司 A kind of self-service Claims Resolution method of intelligent vehicle insurance, self-service Claims Resolution apparatus and system
CN107578453B (en) * 2017-10-18 2019-11-01 北京旷视科技有限公司 Compressed image processing method, apparatus, electronic equipment and computer-readable medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105184313A (en) * 2015-08-24 2015-12-23 小米科技有限责任公司 Classification model construction method and device
CN106355248A (en) * 2016-08-26 2017-01-25 深圳先进技术研究院 Deep convolution neural network training method and device
CN107358596A (en) * 2017-04-11 2017-11-17 阿里巴巴集团控股有限公司 A kind of car damage identification method based on image, device, electronic equipment and system
CN107194398A (en) * 2017-05-10 2017-09-22 平安科技(深圳)有限公司 Car damages recognition methods and the system at position

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110969121A (en) * 2019-11-29 2020-04-07 长沙理工大学 High-resolution radar target recognition algorithm based on deep learning
CN111311540A (en) * 2020-01-13 2020-06-19 平安科技(深圳)有限公司 Vehicle damage assessment method and device, computer equipment and storage medium
CN111401360B (en) * 2020-03-02 2023-06-20 杭州雄迈集成电路技术股份有限公司 Method and system for optimizing license plate detection model, license plate detection method and system
CN111401360A (en) * 2020-03-02 2020-07-10 杭州雄迈集成电路技术股份有限公司 Method and system for optimizing license plate detection model and license plate detection method and system
CN113408561A (en) * 2020-03-17 2021-09-17 北京京东乾石科技有限公司 Model generation method, target detection method, device, equipment and storage medium
CN111652209A (en) * 2020-04-30 2020-09-11 平安科技(深圳)有限公司 Damage detection method, device, electronic apparatus, and medium
CN111553169A (en) * 2020-06-25 2020-08-18 北京百度网讯科技有限公司 Pruning method and device of semantic understanding model, electronic equipment and storage medium
CN111553169B (en) * 2020-06-25 2023-08-25 北京百度网讯科技有限公司 Pruning method and device of semantic understanding model, electronic equipment and storage medium
CN111832466A (en) * 2020-07-08 2020-10-27 上海东普信息科技有限公司 Violent sorting identification method, device, equipment and storage medium based on VGG network
CN111885146A (en) * 2020-07-21 2020-11-03 合肥学院 Industrial data cloud service platform data transmission method for new energy automobile drive motor assembly production line
CN113554084A (en) * 2021-07-16 2021-10-26 华侨大学 Vehicle re-identification model compression method and system based on pruning and light-weight convolution
CN113554084B (en) * 2021-07-16 2024-03-01 华侨大学 Vehicle re-identification model compression method and system based on pruning and light convolution
CN113901904A (en) * 2021-09-29 2022-01-07 北京百度网讯科技有限公司 Image processing method, face recognition model training method, device and equipment

Also Published As

Publication number Publication date
CN108764046A (en) 2018-11-06

Similar Documents

Publication Publication Date Title
WO2019205391A1 (en) Apparatus and method for generating vehicle damage classification model, and computer readable storage medium
CN109961009B (en) Pedestrian detection method, system, device and storage medium based on deep learning
WO2019205376A1 (en) Vehicle damage determination method, server, and storage medium
CN108009543B (en) License plate recognition method and device
CN109815843B (en) Image processing method and related product
WO2018205467A1 (en) Automobile damage part recognition method, system and electronic device and storage medium
CN110197146B (en) Face image analysis method based on deep learning, electronic device and storage medium
WO2019174130A1 (en) Bill recognition method, server, and computer readable storage medium
WO2019109526A1 (en) Method and device for age recognition of face image, storage medium
CN109960742B (en) Local information searching method and device
CN110008997B (en) Image texture similarity recognition method, device and computer readable storage medium
CN110033018B (en) Graph similarity judging method and device and computer readable storage medium
CN108491866B (en) Pornographic picture identification method, electronic device and readable storage medium
KR20190021187A (en) Vehicle license plate classification methods, systems, electronic devices and media based on deep running
US11830103B2 (en) Method, apparatus, and computer program product for training a signature encoding module and a query processing module using augmented data
CN107832794B (en) Convolutional neural network generation method, vehicle system identification method and computing device
CN112101305B (en) Multi-path image processing method and device and electronic equipment
CN111914775B (en) Living body detection method, living body detection device, electronic equipment and storage medium
WO2021114612A1 (en) Target re-identification method and apparatus, computer device, and storage medium
WO2019056503A1 (en) Store monitoring evaluation method, device and storage medium
CN111595850A (en) Slice defect detection method, electronic device and readable storage medium
CN111160288A (en) Gesture key point detection method and device, computer equipment and storage medium
WO2019085338A1 (en) Electronic apparatus, image-based age classification method and system, and storage medium
CN112464890A (en) Face recognition control method, device, equipment and storage medium
CN114550051A (en) Vehicle loss detection method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18916252

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 08.02.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18916252

Country of ref document: EP

Kind code of ref document: A1